Anyone picking up last week's Daily Telegraph will have seen the shock-horror headline on hospital mortality rates. This was typical media hype that did nothing to assist the discussion about how we encourage improvement in the NHS, nor about how we reduce inequalities across the country.

Deaths in hospital are rare, running at about 2.0 per cent in 2005/6. The figures quoted in the article show variation across the country of between about 1.5 and 3.0 per cent at hospital level. Of course we should not be complacent about such variation, if it is avoidable, but these figures provide a rather better sense of scale than the quoted mortality rates.

The main issue is whether the deaths are avoidable or whether they simply represent the inevitable end of life, in which case all that is being reported is that such deaths are distributed unevenly between hospitals. If the latter is true, then conclusions about the quality of care at the 'bad' hospitals may be completely erroneous.

Flaws in the system

To answer this point we have to get behind the methodology used to calculate the 'expected' figure, which defines a hospital's position in the 'league table' and whether the authors perceive it as 'good' or 'bad'. Two basic requirements must be met to produce a valid 'expected' figure: high-quality data, and a valid and complete statistical model. With respect to the data, the medical director of the George Eliot Trust highlights the problems of making valid comparisons when trusts record, with varying accuracy, different amounts of information about diagnoses, co-morbidities and so on on their computer systems (as opposed to what is recorded in the patients' notes).
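
For readers less familiar with how such an 'expected' figure is typically produced, the sketch below shows a simple indirect standardisation: a national death rate for each case-mix group is applied to a hospital's own admissions, and the total is compared with the deaths actually observed. It is an illustration only, with invented groups and figures, and is not Dr Foster's model or real NHS data.

```python
# Illustrative sketch only: indirect standardisation of hospital deaths.
# The case-mix groups, rates and counts are invented for this example;
# they are not Dr Foster's model or real NHS figures.

national_death_rate = {      # deaths per admission, by case-mix group
    "fractured hip": 0.08,
    "pneumonia": 0.12,
    "elective surgery": 0.005,
}

hospital_admissions = {      # this hospital's admissions, by group
    "fractured hip": 300,
    "pneumonia": 500,
    "elective surgery": 4000,
}

observed_deaths = 95         # deaths actually recorded at the hospital

# 'Expected' deaths: apply each national rate to the hospital's own case mix.
expected_deaths = sum(
    national_death_rate[group] * admissions
    for group, admissions in hospital_admissions.items()
)

# Standardised ratio: 100 means deaths are exactly as 'expected'.
ratio = 100 * observed_deaths / expected_deaths
print(f"Expected deaths: {expected_deaths:.1f}, standardised ratio: {ratio:.0f}")
```

The reliability of the resulting ratio rests entirely on the two requirements above: if the coded case mix feeding the calculation is inaccurate, or the model leaves out factors that genuinely affect the risk of death, the 'expected' figure, and with it the league-table position, cannot be trusted.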

However, of equal, if not greater, importance is the quality of the model, and here the Dr Foster methodology is found wanting. The use of a limited number of diagnoses (about 80) within the analysis is unnecessarily restrictive. These selected diagnoses are heavily biased towards cancer and heart and circulatory conditions and occur with very different frequency across hospitals: a previous examination has shown that they account for anywhere between 13 and 42 per cent of admitted patients. Dr Foster points out that they account for about 80 per cent of all deaths across England, but again these figures vary substantially at hospital level, with the diagnoses accounting for between 46 and 86 per cent of all deaths occurring in hospital. In other words, hospitals are being assessed on only a part of their activity, and a part which favours some hospitals more than others.
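
To put those two extremes side by side, the sketch below uses deliberately invented figures for two hypothetical hospitals: in one, the monitored basket of diagnoses covers roughly 13 per cent of admissions and 46 per cent of deaths; in the other, roughly 42 per cent of admissions and 86 per cent of deaths. The names and numbers are made up purely for illustration.

```python
# Invented figures only: how far a fixed 'basket' of monitored diagnoses
# covers each hospital's admissions and deaths.

hospitals = {
    # (admissions in the monitored basket, total admissions,
    #  deaths in the monitored basket, total deaths)
    "Hospital A": (5_000, 38_000, 400, 870),
    "Hospital B": (17_000, 40_000, 700, 815),
}

for name, (adm_basket, adm_total, deaths_basket, deaths_total) in hospitals.items():
    print(
        f"{name}: basket covers {adm_basket / adm_total:.0%} of admissions "
        f"and {deaths_basket / deaths_total:.0%} of deaths; "
        f"{deaths_total - deaths_basket} deaths fall outside the comparison"
    )
```

Whatever happens in the admissions and deaths outside the basket is invisible to the comparison, and the size of that blind spot differs markedly from one hospital to the next.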

Lack of evidence

Finally, the quote that sought, without any evidence, to link cause of death, and by implication the higher mortality rate at some hospitals, with 'medical error, infection and failure to deliver quality of care' was irresponsible. As the researchers no doubt know, there are many other reasons why patients might die in hospital, including the severity of the illness on admission and the level of provision of alternative facilities in the community. Without evidence, the suggestion that medical errors and infections lead to a high death rate at a particular hospital is highly inappropriate and contentious.

The Department of Health's spokesman seems to have got it spot on, saying that 'it is impossible to condense into one number the entire performance of a hospital in a way comparable with every other hospital in the country'. It has been shown that a far more productive way of improving quality in the NHS than pressurising hospitals with a dubious, sensationalist approach is to work with trusts over a period of time to help them better analyse their performance across the full range of their activities.

Graham Harries, Chief Executive, CHKS