How useful are clinical performance measures? And will they prevent such tragedies as the Bristol baby deaths case? John Appleby reports

The public inquiry set up in the wake of the deaths of babies at United Bristol Healthcare Trust will consider some of the wider issues of how to prevent such tragedies in future.

Already the Department of Health has announced that, regardless of the results of the public inquiry, it intends to publish a range of clinical performance measures for every trust in the country. These will include operative death rates. In addition, the DoH intends to beef up clinical audit.

The four national confidential inquiries - such as the National Confidential Enquiry into Perioperative Deaths (NCEPOD) - are currently voluntary. The DoH intends not only to make these audits compulsory from next year, but also to require every hospital doctor to participate in professional audit of their own work.

It is clear that the Bristol tragedy has dented the public's confidence in the NHS and the care it provides. It is also clear that the government (and the NHS and, most particularly, the medical profession) have to be seen to respond positively. However, two questions (at least) will have to be addressed by the public inquiry headed by Professor Ian Kennedy: what information is needed to monitor clinical performance, and how should such information be used?

Much of the basic information that could be used to measure and monitor clinical performance has been in existence for years. The Hospital Episode Statistics (HES) contain information on deaths in hospital at an individual patient level which can be easily analysed by individual consultant or specialty. One aspect of clinical performance that is not captured, however, is what happens to patients after discharge.
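By way of illustration, the sketch below shows how a crude in-hospital death rate by consultant might be derived from episode-level records. The field names and figures are hypothetical stand-ins, not the real HES record layout; pandas is used simply as a convenient tool for the calculation.

import pandas as pd

# Hypothetical HES-style records: one row per finished episode.
# Field names are illustrative only, not the actual HES layout.
episodes = pd.DataFrame({
    "consultant": ["C01", "C01", "C01", "C02", "C02", "C03"],
    "specialty": ["cardiac surgery"] * 6,
    "died_in_hospital": [0, 1, 0, 0, 1, 0],
})

# Crude in-hospital death rate per consultant: deaths / episodes.
crude = episodes.groupby("consultant")["died_in_hospital"].agg(
    deaths="sum", episodes="count")
crude["crude_death_rate"] = crude["deaths"] / crude["episodes"]
print(crude)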

A key problem with the sort of data already collected - and one highlighted by the DoH - is ensuring that like is compared with like. Operative death rates may vary between consultants for a variety of reasons - some of which will be beyond the control of the individual surgeon, physician or anaesthetist. The state of equipment in a hospital, the complexity of cases, and the age and other characteristics of patients are all likely to affect the crude death rate and hence make comparisons difficult. Nevertheless, how many health authorities are systematically examining their HES data to identify under-performing specialties or consultants?
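One standard way of moving beyond crude rates is indirect standardisation: for each consultant, work out the deaths that would be expected given the specialty-wide death rate in each case-mix group, then compare observed with expected deaths. The sketch below illustrates the arithmetic with a single, hypothetical case-mix factor (age band); real adjustment would draw on many more variables.

import pandas as pd

# Hypothetical episodes with age band as the only case-mix factor.
df = pd.DataFrame({
    "consultant": ["C01"] * 4 + ["C02"] * 4,
    "age_band":   ["<60", "<60", "60+", "60+", "<60", "60+", "60+", "60+"],
    "died":       [0, 0, 1, 1, 0, 0, 1, 0],
})

# Reference death rate for each age band across all consultants.
ref_rate = df.groupby("age_band")["died"].mean()

# Expected deaths for each consultant given their own case mix,
# then the standardised mortality ratio: observed / expected.
expected = df.groupby("consultant")["age_band"].apply(
    lambda bands: ref_rate.loc[bands].sum())
observed = df.groupby("consultant")["died"].sum()
print(observed / expected)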

Statistically, there are ways and means of dealing with such confounding factors. Multilevel modelling (MLM), for example, has been used to separate the effects of case-mix and surgical skill on the final operative outcome - survival. MLM has also been used in other fields, such as education, to identify the extent to which exam results are due to the quality of a school (rather than the quality of the pupils in a school).
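To show the structure of such a model, the sketch below fits a random-intercept model to hypothetical data using the statsmodels library: patient-level case-mix variables (age, case complexity) enter as fixed effects, while a surgeon-level random intercept picks up residual between-surgeon variation. For a binary outcome such as survival a multilevel logistic model would be the natural choice; a linear mixed model is used here purely to keep the sketch short.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 400

# Hypothetical patient-level data: age and complexity are case-mix
# variables; 'surgeon' identifies the operating consultant.
df = pd.DataFrame({
    "surgeon": rng.integers(0, 10, n).astype(str),
    "age": rng.normal(60, 10, n),
    "complexity": rng.normal(0, 1, n),
})
surgeon_skill = rng.normal(0, 0.3, 10)  # simulated between-surgeon variation
df["outcome"] = (0.02 * df["age"] + 0.5 * df["complexity"]
                 + surgeon_skill[df["surgeon"].astype(int)]
                 + rng.normal(0, 1, n))

# Fixed effects estimate the case-mix contribution; the random intercept
# for each surgeon captures what is left over between surgeons.
model = smf.mixedlm("outcome ~ age + complexity", df, groups=df["surgeon"])
print(model.fit().summary())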

Assuming the statisticians can produce comparative performance measures, there remains the issue of what to do with the information. Should such data be made public or kept by the Commission for Health Improvement and used in internal clinical audits? In a league table of surgical success rates there will always be some surgeons below average. A statistician may be untroubled by a below-average surgeon because much of the variation in the league table will be statistically 'insignificant'.
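The 'insignificance' point can be made concrete with a simple sketch using hypothetical figures: with the caseloads typical of a single surgeon, the confidence interval around each death rate is wide, and a rate above or below the average may be well within the range expected by chance.

from statsmodels.stats.proportion import proportion_confint

# Hypothetical caseloads: (deaths, operations) for three surgeons.
surgeons = {"Surgeon A": (3, 100), "Surgeon B": (5, 100), "Surgeon C": (8, 100)}
total_deaths = sum(d for d, _ in surgeons.values())
total_ops = sum(n for _, n in surgeons.values())
average = total_deaths / total_ops

for name, (deaths, ops) in surgeons.items():
    # Wilson 95% confidence interval for each surgeon's death rate.
    low, high = proportion_confint(deaths, ops, method="wilson")
    verdict = ("consistent with the average" if low <= average <= high
               else "significantly different")
    print(f"{name}: {deaths/ops:.1%} (95% CI {low:.1%} to {high:.1%}) - {verdict}")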

A statistician might similarly conclude that Chelsea's performance last season was statistically no different from Arsenal's - but Arsenal came top and Chelsea didn't. So how will the general public react to hospital league tables?