The public do not act on mortality rates to work out how best to access care. Solving this remains a challenge, and one that will require all the information we can gather and more, write Roger Taylor and Paul Aylin
When Dr Foster first published the hospital mortality ratios calculated by Imperial College London we were accused by one doctor of acting like a terrorist organisation. The debate is calmer today but the issue remains hotly contested.
In March, the BBC broadcast a radio programme arguing that standardised mortality ratios are unreliable. File on Four included advice from Nick Black, professor of health services research at the London School of Hygiene and Tropical Medicine, that the public should ignore the figures.
‘Claims that mortality ratios do not correlate with other measures of quality are debunked’
Today, Dr Foster and Imperial College publish our response. In our report we examine in turn each of the statements made. Claims that mortality ratios do not correlate with other measures of quality are debunked. Claims that audit data are more accurate than routine data are found to be inaccurate. Claims about the superiority of measures of avoidable deaths are refuted.
The issue of “avoidable” or “preventable” deaths is explored in some detail.
These terms refer to patients for whom a specific act or omission contributed to their death. Measures of avoidable mortality are derived by reviewing case notes to identify such acts or omissions.
But the natural concept of avoidability is broader than that. For example, we know that, if you have a heart attack, the longer it takes to unblock the blood vessels, the more likely you are to die.
We can say with some certainty that among patients who died after delayed treatment, some would have survived if treated more promptly. But there is no set time at which a death becomes avoidable.
If we look through the case notes of these patients we will not be able to identify those who would have survived if treated more quickly. Mortality ratios, which compare how many patients died with the number that would have been expected given their risk profile, do capture these differences.
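The comparison a mortality ratio makes can be reduced to simple arithmetic: observed deaths divided by the deaths expected from each patient's modelled risk. The sketch below is illustrative only; the per-patient risk figures and the cohort are hypothetical, and in practice the expected risks come from a case-mix adjustment model (age, diagnosis, comorbidities and so on), not from the handful of numbers shown here.

```python
def standardised_mortality_ratio(patients):
    """Compute an SMR from a list of (died, expected_risk) tuples.

    observed = actual deaths in the cohort
    expected = sum of each patient's modelled probability of death
    A value of 100 means deaths exactly matched expectation.
    """
    observed = sum(1 for died, _ in patients if died)
    expected = sum(risk for _, risk in patients)
    return 100 * observed / expected

# Hypothetical cohort: 3 deaths among 6 patients whose modelled risks
# sum to 2.0, so roughly 50% more deaths occurred than expected.
cohort = [(True, 0.6), (True, 0.5), (True, 0.4),
          (False, 0.2), (False, 0.2), (False, 0.1)]
print(standardised_mortality_ratio(cohort))
```

The point the sketch makes is the one in the text: the ratio is sensitive to excess deaths across the whole cohort even when no individual death can be singled out as avoidable.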
‘Avoidable mortality measures are a welcome additional tool. But they are no more than that’
Given that, and given the inherent unreliability of case note reviews, it is peculiar that avoidable mortality measures should be promoted as a replacement for mortality ratios. Avoidable mortality measures are a welcome additional tool. But they are no more than that.
Why do debates about measurement in healthcare become so polarised? Why do people make extreme claims – that audit data are better than routine data or that process measures are better than outcomes – when the right answer is almost always to combine the best of each?
Find the control
One reason is that these arguments have become proxy wars for completely different, more political arguments.
If we compare audit data and administrative data it is quickly plain that in some situations audit data has advantages, but in many others, administrative data is superior.
There is no consistent difference between them apart from one thing: the issue of control. Audit data are controlled by clinicians, administrative data by managers.
‘Healthcare is here to deliver outcomes for patients, not processes’
Claims made about audit data are more to do with clinical autonomy than with data quality. But in a world in which patient records are being computerised, and clinicians have no choice but to work with managers to make best use of limited resources, attempts to defend isolated data domains under the control of clinical communities are doomed to fail.
If we compare process and outcome measures, it is clear that performance on process measures can give as misleading an impression of quality as performance on outcomes. The difference is not that one approach enables a more accurate understanding of quality. The difference is that process measures are easier to work with.
If performance on a process measure is poor, it is clear what needs to be done. If performance is poor on an outcome metric, working out how to respond is much harder. This makes life difficult for both managers and clinicians. But in the end, healthcare is here to deliver outcomes for patients, not processes.
Use flawed data intelligently
Perhaps the most contentious area of conflict is the question of publication and the fear that the media or the public will put too much weight on particular indicators and create a distorted picture.
Every effort needs to be made to acknowledge the limitations of available data and to use flawed information intelligently. But we will not fix this problem by reducing the information available to us, closing down avenues of investigation or rejecting any piece of data that can enable us to better identify and tackle variations in quality.
Professor Black advises the public to ignore mortality rates. It is bad advice, but also pointless.
‘We have still not found ways of providing the public with information in forms they can use to get the best quality care’
The public do not act on mortality rates, even though in some instances doing so would have given them access to superior care and, quite likely, a longer life. Our problem is not that the public over-interpret information.
Our problem is that we have still not found ways of providing them with information and advice in forms they can use to help them get the best quality care available. Solving that will require us to use all the information we currently have and more.
Roger Taylor is director of research and public affairs at Dr Foster. Dr Paul Aylin is a clinical reader in epidemiology and public health, and co-director of the Dr Foster Unit at Imperial College London