Performance Watch is HSJ’s fortnightly expert briefing on the most pressing performance matters troubling system leaders. Contact me in confidence here.

A major national review due later this year is set to provide vital insights about emergency department variation – just around the time the debate over whether to ditch the four-hour target will be reaching the business end.

And the review’s co-leads gave an insightful sneak preview at a conference held by the King’s Fund earlier this month where they set out some interim findings and the direction of travel of their report.

As HSJ reported today, the Getting It Right First Time urgent and emergency care workstream will also raise wider issues around accident and emergency management, and pose “very challenging questions” for some of the NHS’ prestigious teaching hospitals and their poor four-hour performance.

The review is being overseen by the Royal College of Emergency Medicine’s vice president Chris Moulton and NHS England’s emergency care clinical lead Cliff Mann. They have already visited 50 EDs and are due to visit the remaining 126 English A&Es before the report is finalised.

The analysis they presented around the huge variation in admission ratios and case mix is worth emphasising, considering the debate around potentially having a new universal standard.

The “conversion rate” – the proportion of attendances resulting in admission – ranged from 16 to 44 per cent across A&Es, with a national mean of around 30 per cent, the GIRFT review has found. Admissions via A&E accounted for as little as 35 per cent of total admissions at some hospitals, and all of them at others.

Asked whether the four-hour target should be ditched, Dr Mann advocated neither axing nor keeping the standard outright, but gave an interesting and nuanced answer.

He argued that, given the huge variation in what trusts are dealing with, a “one-size-fits-all” stand-alone target was no longer appropriate.

A single headline measure is also too easily gamed, and the numerous changes in clinical pathways since its introduction in 2004 mean a more sophisticated set of metrics is required, he said.

Such a move would be a huge challenge, given the failures of previous efforts to introduce a suite of measures in 2010 (the clinical standards indicators) and 2017 (NHSI’s A&E scorecard).

Dr Mann said NHS England’s clinical review of standards pilots, which began this month, would provide vital learning to the debate, but he indicated some metrics he would be supportive of.

The aggregated patient delay – a new metric designed to present a more holistic measure of A&E performance – was proving a useful benchmark on which to compare EDs, he said.

The APD is calculated by adding up all delays past a breach target. The target does not, however, have to be four hours, he said.

As previously reported, Dr Mann suggested a six-hour ‘zero breaches’ backstop could be established (although this does not preclude a four-hour target with a lower threshold also being in place).

He also talked about the value of calculating the “avoidable lost time” between when the patient was ready for admission and when they were actually admitted to the ward. This was a robust indicator of how efficiently the ED was working with other parts of the hospital, he said.

The review will, however, be far wider than addressing the best headline measures. Other themes they have been grappling with so far include:

  • The IT burden: It takes three minutes to enter a patient’s data into the electronic patient record in some departments and up to 12 in others. GIRFT has calculated that each extra minute equates to a quarter of a million hours a year, or 174 doctor shifts per ED.
  • Is the NHS investing in building new EDs in the right places? Dr Mann argued the data suggests a lot of capital investment should have been put elsewhere.
  • Do trusts focus enough resource on their A&Es? The average ED accounts for 4 per cent of a trust’s expenditure, but on average more than 70 per cent of acute admissions come via the ED.

The next step in the debate around reform of the performance metrics will come when the pilots complete their rapid testing of new metrics over the summer. The two programmes, which are being run largely separately, will have much to learn from one another about how best to reform the targets.

Explainer: What is the aggregated patient delay metric?

The APD metric measures the total time that admitted patients who have breached the four-hour target spend in the emergency department before being placed in a bed.

It was developed as a supplementary metric to the four-hour standard to help address some of the perverse behaviours driven by the core A&E standard, and to help understand and improve patient flow and clinical outcomes, according to its inventor, Dr Moulton.

The APD only measures patients who have breached four hours, so the long-standing waiting time benchmark represents the foundation on which the newer metric was developed, Dr Moulton said.

The metric was intended, among other things, to help address the “3:55 rush”, where departments sometimes focus on patients about to breach the four-hour mark, potentially at the expense of other patients who have already waited longer than four hours.

Under the existing headline target, there is no difference whether a patient breaches by one minute or 10 hours – they are still marked down as a breach. This has long troubled both clinicians and managers.

GIRFT data shows the average time spent in A&E by an admitted patient who breached the four-hour target was around eight hours, HSJ understands.

If a hospital recorded no breaches in its A&E, it would record an APD score of zero. The APD is expressed as the number of hours per 100 patients to calculate an average score and account for outliers.
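The calculation described above – summing every delay beyond the breach threshold and expressing the total per 100 patients – can be sketched in a few lines. This is an illustrative reconstruction only: the function name, the handling of the threshold and the exact normalisation are assumptions, not the published GIRFT methodology.

```python
def aggregated_patient_delay(stay_hours, breach_threshold=4.0):
    """Illustrative APD sketch: total hours waited beyond the breach
    threshold, expressed per 100 attendances (an assumed normalisation)."""
    # Sum only the time past the threshold; patients within it contribute zero.
    total_delay = sum(max(0.0, stay - breach_threshold) for stay in stay_hours)
    return 100.0 * total_delay / len(stay_hours)

# Example: four attendances, two of which breach the four-hour mark.
stays = [2.5, 3.9, 5.0, 8.0]            # hours spent in the ED
print(aggregated_patient_delay(stays))  # (1.0 + 4.0) hours over 4 patients -> 125.0
```

Note how a department with no breaches scores zero, as the article states, and how a 10-hour breach weighs far more heavily than a five-minute one – the behaviour the single pass/fail target cannot capture.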