Dr Foster’s Hospital Guide was no help to managers and patients - in fact it added to the confusion

Stephen Thornton on interpreting information

The debate around publishing performance data, hospital standardised mortality ratios and patient safety data is full of controversy. It hit the headlines again at the close of last year with the publication of the Dr Foster Hospital Guide 2009.

The Health Foundation supports publication of comparative data on performance, but only when done in the right way and for the right audience. However, recent events show there is still much to learn about how best to do this.

Research commissioned by the Health Foundation, Does Public Release of Performance Results Improve Quality of Care?, found that it motivates a push for improvement, by harnessing the competitive instincts of doctors and managers. Hospitals also claimed that concern for public image is a big motivator.

However, the research found little evidence that public reporting of health outcomes affects patients’ choice of provider. According to the 2009 national patient choice survey, patients base their decisions on GP recommendations and the experience of friends and family. Only 5 per cent cited using NHS Choices and 1 per cent other websites.

These are the grounds on which the Health Foundation believes Dr Foster Intelligence to have inappropriately packaged data - best suited to an audience of health professionals and managers - for the general public in its Hospital Guide. Contrary to the view expressed by National Patient Safety Agency chief executive Martin Fletcher in his foreword to the guide, the Health Foundation does not believe it “helps to equip patients and the public to better understand safety in their hospitals”. On the contrary, the inevitable flurry of publicity surrounding its publication added to public confusion at best and heightened fears at worst.

It would have been better had the Hospital Guide been written explicitly for the boards of hospitals. It is they who need to lead the development of a safety culture of openness and learning. It is they who must place themselves at the heart of their hospital, reviewing data, walking through the wards and removing barriers to enable clinicians and managers to focus on improving safety.

Shift the balance

Dr Foster should have presented the data in a more informative and balanced manner. Confidence intervals should have been shown explicitly, so that readers could judge how reliable each piece of data was. Aggregate ratings are unhelpful and misleading. Instead the material should have been presented in the form of a balanced scorecard, as used by hospitals in the Health Foundation’s Safer Patients Initiative to analyse their data.

The Hospital Guide relied heavily on hospital standardised mortality ratios without any explicit recognition of the degree to which such measures and their interpretation remain highly contested.

Research commissioned by NHS West Midlands and conducted by Birmingham University concluded that the ratios are not an effective indicator of the quality of care in hospitals. It also found no evidence to suggest that hospitals with high standardised mortality ratios were failing across the board, and identified a methodological error in the construction of the ratios reported by Dr Foster. Conversely, writing in the BMJ, Jane Feinmann cites Walsall Hospitals Trust as a shining example of how the measure can be used to instigate large scale organisational improvement.

While it is valuable that Dr Foster has raised the profile of patient safety, the information provider should have been open about this debate.

These data should not have been presented as beyond question and used to fuel sensationalist media headlines. Measures should be developed that are appropriate both to the needs of the healthcare professionals who will use them to improve care and to the areas of medicine being measured.

These events should be used as a learning tool by the national quality board. It must agree what needs to be done to achieve greater alignment between organisations within, and working with, the NHS, when compiling and presenting performance data. It must also commission an investigation into the effectiveness of the hospital standardised mortality ratio so that definitive guidance can be issued on what conclusions can be drawn from the score.

Finally it must be remembered that rankings are just rankings. Whatever measure you choose to assess hospitals against, if you rank them there will be those at the top and those at the bottom. This in itself tells us nothing about underperformance. Instead, it simply identifies the fact that there is variation - and we already know that exists.

Wherever a particular trust may have come in this exercise, it will still have plenty of opportunity to work to decrease the incidence of error and harm and to improve safety for its patients. The Health Foundation continues to build knowledge about safety through its Safer Patients Network of 18 leading edge trusts and to support the Patient Safety First Campaign. If all trusts in England were to engage actively in the campaign by implementing and reporting data on all five of its recommended interventions, they would all improve their safety record, wherever their place in the Dr Foster rankings.