How can Sir Bruce Keogh head off accusations that his review of high mortality rates will be a whitewash, and is there any outcome that will satisfy the critics?

Followers on Twitter of HSJ reporter Ben Clover will be aware that Sir Bruce Keogh’s review of high mortality rates in 14 NHS trusts is in full swing. Two weeks ago peer review inspections took place at Basildon and Thurrock and at Medway, with more to follow. Yet doubters are already contacting HSJ fearing that Sir Bruce’s inquiry will be the customary whitewash, an accusation often levelled at NHS investigations that don’t see heads roll or blame apportioned.

Plan of action

Sir Bruce could not have been more transparent about his plans and intentions for this review (much more so than other NHS England reviews, as End Game noted in the 10 May issue of HSJ). He has a web page on NHS Choices, an esteemed advisory board, and has even announced the dates of each inspection visit. But his biggest fear must be what will happen if he does not find anything.

‘Sir Bruce Keogh’s biggest fear must be what will happen if he does not find anything’

Of course, I hope failings in patient care are not found in any of the hospitals involved. But if they are, I have no doubt the combined might of the local clinical commissioning group, the Care Quality Commission, Monitor, the NHS Trust Development Authority and NHS England will ensure processes and outcomes are improved, changes are made and performance is monitored.

Any trust with a high mortality rate will know that the cause lies somewhere on the spectrum from a problem with the data to a problem with the quality of care. The key is for clinicians at each trust to understand where on that spectrum, and why, for their particular service and that particular alert. And if the cause is at the data end, is it a false positive, an accidental coding error or more suspicious coding manipulation? Too often this coding argument, the ‘we are special and different’ defence, has been accepted as the underlying cause of the high rate.
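To make the coding point concrete, here is a rough sketch with invented figures (not real trust data, and nothing like the full SHMI or HSMR models): the same hospital, with the same number of deaths, can look like an outlier or look perfectly ordinary depending on how fully comorbidities are coded, because coding drives the ‘expected’ side of the ratio.

```python
# Illustrative sketch only (invented figures, not real NHS data): the same
# hospital, with the same deaths, sits above or below the "as expected" line
# purely because deeper comorbidity coding raises the expected death count.

observed_deaths = 120

# Expected deaths come from risk models that depend on what has been coded.
expected_shallow_coding = 100   # few comorbidities recorded -> lower expected deaths
expected_deep_coding = 130      # fuller comorbidity coding -> higher expected deaths

print(f"Ratio with shallow coding: {observed_deaths / expected_shallow_coding:.2f}")  # 1.20 - triggers an alert
print(f"Ratio with deeper coding:  {observed_deaths / expected_deep_coding:.2f}")     # 0.92 - no alert
```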

Looking to fail?

So what will Sir Bruce do if no problems are found?

Well, he could send the inspection teams back in or appoint new and tougher inspectors (some of the “whitewash” accusers are ex-inspectors hopeful of being brought in from the cold). But as former Healthcare Commission chair Sir Ian Kennedy used to say, the longer you examine a hospital the more “failure” you are likely to find.

He could calmly announce that there was “nothing to see here”. But where does that leave the current ways of measuring mortality rates? The sceptics will seize on this as proof that the measures have no efficacy, the champions will cite Mid Staffs over and over again, the publication of speciality-level mortality rates may well be delayed and patients will, as ever, be left confused about the safety of their local hospital.

He will also have to deal with the “whitewash” clamour as the local versions of Cure the NHS that have sprung up around most of the 14 trusts call for his head.

So how can Keogh and his team prepare for such an eventuality?

What if it’s coding?

Well, in this case he must be as open as possible and explain the false positives in layman’s terms: no p-values and funnel plots, but simple graphics and clear explanations that people can comprehend and trust. He needs to be precise about what ‘an artefact of coding’ means and reassure patients.
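One way to show this in plain terms is sketched below, with invented figures rather than real data: even if every hospital had exactly the same underlying death risk, ordinary chance variation would push a few of them over a funnel-plot-style control limit. In this toy setup, roughly two or three in every hundred identical hospitals cross the limit by luck alone.

```python
# Illustrative sketch only (invented figures, not real NHS data): even when
# every hospital has the same underlying death risk, chance variation pushes
# a few of them over a rough 95% funnel-plot-style control limit.
import math
import random

random.seed(42)

TRUE_DEATH_RATE = 0.03          # identical underlying risk everywhere
ADMISSIONS = 500                # a smallish hospital
N_HOSPITALS = 1000              # simulated identical hospitals

expected_deaths = TRUE_DEATH_RATE * ADMISSIONS
sd = math.sqrt(ADMISSIONS * TRUE_DEATH_RATE * (1 - TRUE_DEATH_RATE))
upper_limit = expected_deaths + 2 * sd   # roughly a 95% upper control limit

flagged = 0
for _ in range(N_HOSPITALS):
    deaths = sum(random.random() < TRUE_DEATH_RATE for _ in range(ADMISSIONS))
    if deaths > upper_limit:
        flagged += 1                     # a "false positive" alert

print(f"{flagged} of {N_HOSPITALS} identical hospitals flagged by chance alone")
```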

What if it’s data quality?

If so, he can take no prisoners. This should be considered almost as serious as care failings, and it will be backed up by the new Care Bill, which will introduce a criminal offence for providers who supply false or misleading information. If trusts have not paid enough attention to data quality and this has triggered an inspection, there should be serious consequences, not just vague action plans and promises of a review in a year’s time.

What if it’s methodology?

If Sir Bruce finds quirks in the way mortality rates have been calculated he must fix them, even if that disrupts the consensus by which the summary hospital-level mortality indicator (SHMI) was devised. He must also instruct the keepers of other methodologies, such as the hospital standardised mortality ratio (HSMR) or the risk-adjusted mortality index, to do the same.
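For readers unfamiliar with these indicators, the observed-over-expected logic they share can be sketched crudely as below (invented figures, and nothing like the full case-mix adjustment the real SHMI and HSMR perform); the methodological argument is largely about how the ‘expected’ figure is built.

```python
# Conceptual sketch only: a crude observed-over-expected mortality ratio in
# the spirit of SHMI/HSMR. The real indicators risk-adjust across many
# diagnosis groups with statistical models; these figures are invented.

# Hypothetical case mix: (patient group, admissions, deaths, national death rate)
case_mix = [
    ("elective surgery", 2000, 10, 0.004),
    ("emergency medicine", 1500, 90, 0.050),
    ("fractured hip", 300, 27, 0.080),
]

observed = sum(deaths for _, _, deaths, _ in case_mix)
expected = sum(admissions * rate for _, admissions, _, rate in case_mix)

ratio = observed / expected   # ~1.0 is "as expected"; well above 1.0 invites scrutiny
print(f"Observed {observed}, expected {expected:.0f}, ratio {ratio:.2f}")
```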

Finally, he needs to make his findings and all the underlying evidence public, and advise clearly on how the SHMI and HSMR should be used to assess safety and quality. If he doesn’t, there is a risk of a crisis for the outcomes debate: will we be able to rely on mortality rates to spot possible failure in future, and if not, why measure them at all?

It’s lose-lose for Sir Bruce: he can’t be hoping for failings, but clean passes might be worse.

Alex Kafetz is a director at ZPB