Patients know what needs to improve in their local hospital as well as anyone. New quality measures should be based on their input, says Mike Birtwistle

“As soon as a measure becomes a target, it ceases to be a good measure.” That simple assertion − a reflection of Goodhart’s law − captures the inherent weakness in any system of aggregate ratings, a weakness that will ultimately undermine it. For however much the government insists that any such system it establishes will be “a measure, not a target”, the truth is that it will become one.

‘Few have been surprised by the hospitals which score very poorly − a pattern of quality failure is emerging and must surely be acted upon’

Whatever the method of improvement − be it a target, an incentive or an accountability measure − if you don’t choose your issue carefully you can end up with all sorts of perverse behaviours adopted in pursuit of the result you have said you want to achieve.

This is as true for health services as it is for banking. You need to be careful what you ask for, because if you ask the wrong thing you might just get it. Even if you ask for the right thing, the people or organisations you are asking might prioritise it to the exclusion of pretty much everything else.

Today, MHP Health Mandate has published its quality index project, which provides the first ever overall assessment of hospital quality, weighted according to the issues which are priorities for the public. The index is a prototype, designed to stimulate and inform discussion rather than provide a definitive verdict on quality.

Knowing nods

The findings have led to more knowing nods than raised eyebrows. Overall, quality appears to be better in foundation trusts than in non-foundation trusts, and significantly weaker in London than it is elsewhere. 

Few have been surprised by the hospitals which score very poorly − a pattern of quality failure is emerging and must surely be acted upon. Health watchers have a pretty good understanding of where the problems are. The challenge is translating that into a public desire for change.

Quality problems are not confined to hospitals with higher mortality rates. There does not appear to be much of a correlation between performance on wider quality issues and inclusion in the “Keogh 14” under investigation.

‘Mid Staffs shows quite how badly things can go wrong when gaming distorts priorities’

What was more interesting was the reaction of the NHS. Last week, we alerted trusts to our publication. The reaction of some − which I will not name − was instructive. Most did not respond. Those that responded and were near the top said simply “thank you”, some effusively. Those near the bottom sought to understand our methodology.

I am sure the latter were doing so to understand what they might be able to improve. But if our index was adopted by the government for its new “aggregate ratings”, and routinely used to measure the quality of hospitals, every trust would be poring over our methodology.

Moving target

In a short space of time they would know which interventions would have the greatest impact on their ranking − and they would be investing time and effort in making them happen. This is not a bad thing; it would lead to improvement. But in time they would begin to focus disproportionately on the handful of measures which influenced the ranking the most, potentially to the exclusion of other issues. This would be very bad indeed.

This is not an abstract concept. It is what happened at Mid Staffordshire Foundation Trust, which shows quite how badly things can go wrong when gaming distorts priorities.

‘If you set a test then over time people will work out how to do well at it. That is human nature’

The trust managed to score three stars for two years in succession, despite the concerns of local commissioners. When star ratings were abolished, the board similarly focused on the “measures” against which it needed to perform well to achieve foundation trust status − work which resulted in what Robert Francis called its “erroneous authorisation”.

This did not show that either of these systems was ill-intentioned, or necessarily inaccurate, merely that a disproportionate focus on them could lead to achievement at the expense of wider quality.

As soon as any measure becomes a tool of regulation it becomes unreliable. That is as true among hospitals as it is among the bankers who sought to influence the LIBOR lending rate. If you set a test then over time people will work out how to do well at it. That is human nature.

This, however, should not be a reason to give up. Instead we need to find a way to make sure the measure used in aggregate ratings never becomes a target. Our quality index offers a simple solution: keep the target moving.

What matters to patients

This is where weighting the measure by factors outside the control of providers − in our case, the priorities of the public − comes in. There is no reason to presume these preferences will be static. Indeed, evidence from public opinion polling suggests that as issues are addressed, their relative importance to the public diminishes.

‘Hospitals treat patients; trusting them to design quality ratings could have many benefits’

Nor could providers predict the exact weighting of priorities. Genuinely concentrating on the issues which polling identifies would not only provide more meaningful and relevant aggregate ratings, it would also be an important safeguard against gaming the system.

A hospital with long waiting times would have an incentive to tackle them, but only while they remained long. As they shortened, other factors would come into play.
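To make this concrete, below is a minimal sketch in Python (illustrative only − not MHP Health Mandate’s actual methodology; every metric name and figure is invented) of a composite score whose weights are drawn from patient polling. Because the weights shift as polled priorities shift, the “target” keeps moving.

    # Illustrative sketch only − hypothetical metrics, weights and scores.
    def composite_score(performance, poll_weights):
        """Weighted average of normalised metric scores (0 = worst, 1 = best),
        with each weight taken from the share of patients naming that issue."""
        total = sum(poll_weights.values())
        return sum(performance[metric] * weight / total
                   for metric, weight in poll_weights.items())

    # A hypothetical hospital's performance, already normalised to 0-1.
    performance = {"waiting_times": 0.9, "cleanliness": 0.6, "dignity": 0.7}

    # Year 1: long waits dominate patients' polled concerns.
    weights_year1 = {"waiting_times": 0.5, "cleanliness": 0.3, "dignity": 0.2}
    # Year 2: waits have shortened, so their polled importance falls.
    weights_year2 = {"waiting_times": 0.2, "cleanliness": 0.4, "dignity": 0.4}

    print(round(composite_score(performance, weights_year1), 2))  # 0.77
    print(round(composite_score(performance, weights_year2), 2))  # 0.70

The same performance earns a lower headline score once waiting times stop being the dominant concern, so the incentive automatically moves on to the next polled priority.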

There are two caveats here. The first is that any such weighting should be based on polling patients rather than the general public. Patients are, after all, the “customers” of hospitals and those best placed to tell hospitals what they need to improve. Our report shows just how markedly the views of patients and the wider public can diverge. The second is that, to have an impact on local providers, the polls would need to cover local patients only.

This would be costly. But the government is investing many millions every year in the new “friends and family test”. That money could instead be used to ask patients: “What matters most to you in determining whether you would choose this hospital?”, offering a list of variables on which data are already collected.

That information could be used to design local “quality ratings”, published nationally, showing just how good a job every local trust has done on the issues which matter most to its local patients. Hospitals treat patients; trusting those patients to design quality ratings could have many benefits.

Mike Birtwistle is managing director at MHP Health Mandate