HSJ’s Performance Watch expert briefing is our fortnightly newsletter on the most pressing performance matters troubling system leaders. This week by bureau chief Ben Clover.

An iceberg of patient harm

One of the great known unknowns of the acute sector is: how many patients have fallen off our waiting lists, their treatment status unknown?

The concerning absence of data on this goes hand in hand with the NHS being unable to keep official track of waiting times for follow-up appointments, which can be no less important than the initial treatment.

The overall decline in performance against the service's elective waiting time standard (that 92 per cent of patients should have waited no longer than 18 weeks) is well known; it has been falling for some time.

But the third and scariest known unknown is the amount of human suffering caused by these problems.

People do come to harm as a result of delayed appointments and missed follow-ups but we have no national picture of how much harm has been done to how many people.

The past week has seen some worrying glimpses into the fog surrounding these questions.

The first thing to say is that even the data on the official waiting lists — the information used to work out the performance against the 18-week target — is not great.

The National Audit Office's 2014 work on waiting list accuracy found only 43 per cent of records were fully documented and correct.

So the picture we see each month might be a fair bit different to the reality. And that is before you factor in the large organisations with waiting list issues so severe they do not even attempt to report their waiting times to the centre.

Prove it

To go back to known unknown number one: a trust with a well-functioning patient record system should be able to look up the record of any given patient and tell you when their treatment started, how long they waited, who they saw, what happened, and when any follow-up is scheduled.

When trusts can’t do this, because the data has been recorded inaccurately or not at all, it doesn’t always mean the patient hasn’t had their treatment/follow-up — it just means you can’t prove it.

A department might have kept track of its patients, just not in a way that fits the system trusts are supposed to submit their data to. An arrangement like this is obviously vulnerable to screw-ups within the directorate, and will be overly reliant on one consultant's way of doing things. If they are off for any length of time, or cut back their hours because of, say, the pensions crisis, things can fall apart quite quickly.

If you can't prove it, you can check it, a very laborious process (St George's in south London had to examine more than two million bits of data as it tried to repair its records). Alternatively, hospitals can trust that people got what they needed and that they or their GPs will follow up if they didn't (although, by then, it could be too late).

To take the St George's example, once you've sifted through that huge pile of data and reconciled treatment codes and departmental idiosyncrasies, you should be left with two piles: people who were treated but it wasn't recorded, and people who weren't treated and it wasn't recorded.

It is obviously this second group that might have come to avoidable harm.

They need to be sorted into people who urgently need their treatment and those who don’t.

Both of these groups need to be assessed to see if they came to harm as a result of missed treatment, and if so how much.

Performance Watch has not spoken to many people close to the running of a harm review process who were very impressed with it.

It divides people into moderate harm, severe harm and death and, because it's all conducted on a trust-by-trust basis as they become aware of the issue and do something about it, there is precious little uniformity in how the process is run. Trusts largely mark their own homework on something you might expect transparency on.

Most harm reviews are on patients who have waited more than 52 weeks. That's it. Much more risk exists in other areas such as ophthalmology follow-ups, gastroenterology cancer surveillance and deaths of patients already on the waiting list, but there is no clear national approach to this.

This is one of the surprising things about harm review processes affecting many hundreds of people — they seem to be left to trusts with very little scrutiny from NHS England/Improvement or sustainability and transformation partnerships.

King's College Hospital Foundation Trust has been carrying out a harm review of 8,500 people after it lost track of endoscopy patients and their treatment was delayed.

Three people died after delayed cancer treatment at this trust; the toll could rise once the full harm review process is completed.

How many have died across England in the last year of delays like this? A known unknown. 

Going blind

How many patients have come to harm as a result of missed follow-up appointments?

We don’t know but a report from the Getting It Right First Time programme this week sheds a little light.

Ophthalmology is a specialty where delays in treatment and missed follow-ups leave people blind or with reduced vision.

How many?

A study from the British Ophthalmological Surveillance Unit in 2017 examined patients over a twelve-month period in 2015 and found 132 patients "experienced permanent deterioration" and "42 were registered as severely sight impaired or sight impaired".

The GIRFT report into ophthalmology released last week said 21,500 glaucoma patients had had their follow-up delayed over the past year. How long the delay was is something only the trusts involved know, and we can’t say for certain that they know. How much harm did patients come to? Only trusts that have carried out harm reviews know.

The Royal College said this was a “major concern” because glaucoma was the most common reason for loss of vision.

ASI SIs?

This whole iceberg of suffering is before we even get to “appointment slot issues”.

This occurs when the NHS central e-referral service for booking treatment slots cannot offer a patient an appointment that fits their criteria. When this happens, the patient and their need are recorded as an "appointment slot issue".

Obviously, managing ASI patients is key to making sure they are treated in a timely manner and are not harmed by a lack of treatment.

But read the official guidance and note that “referrals will be removed from the worklist after 180 days if no action is taken”.

How many patients have been automatically removed from the worklist after 180 days?

Performance Watch understands that no one knows.

Another dangerous known unknown in a system already full of them.

NHSE owes it to patients to discover and reveal the true extent of missing patients, overdue follow-ups and the harm that has come from them before changing the key metric, a move proposed as part of NHSE's ongoing standards review. A new target will likely be consulted on in 2020 under the NHSE proposals.

How far is the English system away from the one in Northern Ireland?