There is suspicion that some hospitals might have purposely delayed waiting list validation in order to game the 18 week target. But the evidence suggests that late validation (for any reason) is not widespread, so deliberate late validation must be rarer, says Rob Findlay.

NHS waiting lists are notoriously error-prone, sometimes because events have happened in patients’ lives while they were waiting, and sometimes because of administrative mistakes within the service.

The process of checking the accuracy of a waiting list – known as validation – is a costly administrative overhead, and one that gets more expensive as the waiting list grows. But it is essential.

If a waiting list data error causes a hospital to book clinical capacity for someone who is not really waiting, then that valuable capacity is likely to end up being wasted when the patient fails to show up. At the very least there will be a scramble to book someone else when the mistake is realised.

Unfortunately, regular validation processes aren’t always as effective as they should be, which I usually put down to chaos rather than conspiracy. However, a whiff of conspiracy lingers because of a quirk in the way the referral to treatment waiting times target works. 

This quirk means that waiting list errors can make a hospital’s waiting times performance look better than it really is, and this leads to concerns that some hospitals might be gaming the target by deliberately allowing such errors to persist.

I am sure it has happened occasionally, but is it widespread? Perhaps the national waiting times data can give us some clues. Let’s start with how the quirk works, and then turn to the numbers to see whether those suspicions might be justified.

How to game the target

The main RTT target is that 92 per cent of the waiting list should have waited less than 18 weeks. Waiting list records are usually validated by the hospital as each patient passes the 12-15 week RTT mark, to make sure they are genuinely still waiting before being offered treatment.

Now, imagine a hypothetical hospital which decides not to validate any patients until their waiting time is very close to 18 weeks. This inflates the apparent number of under 18 week waiters, because all the erroneous waiting list records are still in the data. On the other hand, the number of over 18 week waiters remains much the same because all those patients were validated as they passed the 18 week mark.

What happens when this hospital reports its performance against the 18 week target? Performance is calculated by dividing the number of under 18 week waiters by the total number waiting, and the inflated under 18 week number causes apparently better performance. It could even be enough to tip the hospital over “92 per cent within 18 weeks”, apparently achieving this important target when it would otherwise be breached.
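
To make the arithmetic concrete, here is a toy calculation in Python. The numbers are invented purely for illustration and are not drawn from any real trust:

    # Toy illustration of how late validation flatters RTT performance.
    # All numbers are invented for the example.
    genuine_under_18 = 1000   # patients genuinely waiting under 18 weeks
    over_18 = 90              # patients waiting over 18 weeks
    stale_records = 50        # erroneous records that timely validation would remove

    # Timely validation: errors are removed before reporting
    timely = genuine_under_18 / (genuine_under_18 + over_18)

    # Late validation: errors still inflate the under 18 week count
    late = (genuine_under_18 + stale_records) / (
        genuine_under_18 + stale_records + over_18)

    print(f"timely validation: {timely:.1%}")   # 91.7% - target breached
    print(f"late validation:   {late:.1%}")     # 92.1% - target apparently met

In this example, 50 stale records are enough to turn a breach of the 92 per cent target into apparent compliance.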

In case this all sounds rather clever, consider this. If this hospital genuinely does not know who is waiting for care and who isn’t, then it runs the risk of wasting clinical capacity by booking patients who no longer need it; so productivity falls, waiting times rise, and the games with validation are self-defeating. The alternative is that somehow the hospital does know who is really waiting and who isn’t, in which case it is falsifying its waiting list figures.

Either way, deliberate late validation in order to game the target would be a serious matter.

Is it widespread?

Personally, I do not believe this practice has ever been widespread. Where late validation does occur, it might be for the perfectly good reason that validation often happens shortly before patients are scheduled, and as waiting times grow this is happening closer to 18 weeks than before.

Financial penalties attached to the 18 week waiting times target in England have now been suspended, so trusts have broad opportunities to improve their waiting list data – including their validation – without fear of penalty.

Nevertheless, it would be interesting to check whether late validation was widespread when the penalties still applied. We cannot know what motivated late validation in any particular hospital, but if late validation of any kind turns out to be rare, then deliberate late validation must be even rarer.

So, I have scanned the English RTT data for services where the waiting list (incomplete pathways in RTT jargon) suddenly drops at 18 weeks, but where surprisingly few patients completed their pathways (ie were treated or discharged) with similar waiting times. The chart below shows the sort of thing we are looking for.

[Chart: English RTT data for services, showing incomplete and completed pathways by weeks waited at the end of January and February]

(The exact method is this: I took the incomplete pathway cohorts at >14-18 weeks at the end of January (the pale dotted line in the chart above), then deducted those whose pathways completed during February (using the red and orange columns from the >14-22 week range, making reasonable assumptions about how the data should be apportioned). This produced an estimate of how those waiting list cohorts should look at the end of February, by which time the patients would have been waiting >18-22 weeks.

February was a good month for this kind of comparison because it was exactly four weeks long. Finally, I compared the estimate with the published >18-22 week cohorts at the end of February (the dark dotted line above) to see whether there was a mismatch that might be caused by late validation.)
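
For readers who prefer code to prose, here is a minimal sketch of that calculation in Python. The data structures and variable names are illustrative, and the apportionment of completed pathways across weekly cohorts is simplified compared with the real workings:

    # Sketch of the cohort comparison (illustrative; apportionment simplified).

    def expected_feb_cohorts(incomplete_jan, completed_feb):
        """Estimate the end-of-February >18-22 week waiting list cohorts.

        incomplete_jan: four weekly cohort counts (>14-18 weeks) still
                        waiting at the end of January.
        completed_feb:  estimated completions during February from each
                        of those cohorts (treated or discharged).
        """
        # February is exactly four weeks long, so the end-of-January
        # >14-18 week cohorts age into the end-of-February >18-22 week
        # cohorts, minus whoever completed their pathway in between.
        return [max(waiting - done, 0)
                for waiting, done in zip(incomplete_jan, completed_feb)]

    def anomaly(expected, reported):
        """Compare expected cohorts with the published >18-22 week figures."""
        ratio = sum(expected) / max(sum(reported), 1)
        per_cohort_gap = (sum(expected) - sum(reported)) / len(expected)
        return ratio, per_cohort_gap

A large ratio combined with a large per-cohort gap means many more patients were expected to appear in the >18-22 week cohorts than were actually reported – the signature of late validation.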

I classified each service as one of the following (the thresholds are sketched in code after the list):

  • Late validation “Not detected”, or
  • Late validation “Suggested” (expected cohorts at least 2x the reported size, and an anomaly of at least 10 patients per weekly cohort), or
  • Late validation “Indicated” (at least 3x, and at least 15 patients per weekly cohort), or
  • Late validation “Strongly indicated” (at least 4x, and at least 20 patients per weekly cohort).
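
Expressed as a small Python function (again a sketch, reusing the ratio and per-cohort gap from the earlier calculation):

    def classify(ratio, per_cohort_gap):
        """Apply the late validation thresholds described above."""
        if ratio >= 4 and per_cohort_gap >= 20:
            return "Strongly indicated"
        if ratio >= 3 and per_cohort_gap >= 15:
            return "Indicated"
        if ratio >= 2 and per_cohort_gap >= 10:
            return "Suggested"
        return "Not detected"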

After scanning 4,467 English specialties within trusts where there was enough data to work with (excluding the “All specialties” specialty), the numbers in each category were:

  • 3,729 services (83 per cent) where late validation was “Not detected”,
  • 533 services (12 per cent) where late validation was “Suggested”,
  • 143 services (3 per cent) where late validation was “Indicated”,
  • 62 services (1 per cent) where late validation was “Strongly indicated”.

Looking at the “indicated” and “strongly indicated” categories, I conclude that late validation (for any reason) was rare, and deliberate late validation must, therefore, have been even rarer and certainly not widespread.

Detailed results

The results for each local specialty are included in my regular map of waiting times across England, which is here for February.

The Tableau Public interface for the map allows you to search the data for specific categories. For instance, you can use the “Late validation” drop-down to select “Indicated” and “Strongly indicated”, use the “Specialty” drop-down to select “(All)”, and then select the 14-18 week waiting time group at the top. The map will then show local services that narrowly achieved 18 weeks and where late validation is at least indicated.

If you click any point on the map, it will take you to a fuller analysis, including a chart like the one above.

Without wanting to labour the point, I will say again that an indication of late validation is not an accusation that this is being done deliberately in order to game the target. The point of this exercise is to establish whether late validation for any reason is widespread (it isn’t) so as to put an upper limit on the possible prevalence of deliberate late validation.

This is also an opportunity to understand better the positive reasons for late validation. So if you manage a service where late validation has been indicated, and want to drop me a line about why it is happening for good reasons, that would be much appreciated.

Dr Rob Findlay is director of Gooroo Ltd, which specialises in planning software for non-elective and elective capacity, theatre productivity, and cancer and elective waiting times.