Private companies are making nice profits from producing highly contentious hospital league tables. Isn't it time the NHS built up its own expertise in collating performance data? Laura Donnelly reports

Fast and crude. That is the way they like it. When it comes to data on the seductive topics of death and money, the media knows exactly what it wants: stories that spell out in no uncertain terms your chances of dying in a hospital near you. And league tables laying out the cost of treatments.

The NHS is a rather more sensitive soul. It prefers to linger and tease out the finer points of performance and clinical indicators, shying away from anything as tawdry as a 'league table'.

Sophisticated discussions around the use of the data stay within the NHS family and a handful of economists dedicated to the cause.

Nonetheless, in a modern NHS, access to information is categorically 'a good thing'. In days when the consumer is king, the patient needs to know.

But it is not just those at the bottom of the league who feel uneasy when data is released.

The whole NHS - and its political masters - demonstrates its disquiet every year when reference costs for the NHS are published. These figures are handled by the Department of Health as though it were dealing with an explosive device. Technical briefings by officials demonstrate the fine art of boring or confusing as much of the media as possible, discouraging it from drawing any stark conclusions.

The result? Half a dozen trusts complain that their own data was incorrect, hospitals deemed most expensive or where patients are most likely to die turn out to have good reasons why this is so, and economists argue the toss over whether the figures are adequately weighted or refined.

The most recent NHS data, published in November, pulled together cost and clinical indicators, including seven measures of clinical practice at trust level.

Perhaps the greatest controversy surrounded the number of deaths in hospital within 30 days of surgery, given by health authority, for both emergency and non-emergency admissions, and cluster listings of the same statistics at trust level.

A table showing deaths from all causes - a figure examining mortality on a health authority basis - probably provided sounder data, since the proportion of deaths that actually take place in a hospital setting will always be relatively small. But that would spoil headlines on 'killer hospitals'.

Two factors are adding weight to the debate. One is the Internet, the consumer's friend, offering access to services that rank and benchmark data on performance and healthcare quality. The second is the way in which private companies can now harness publicly available information to their own ends.

Earlier this month, publishing company Dr Foster published its Good Hospital Guide in the Sunday Times, ahead of the launch of its website comparing hospitals in the UK and Ireland. This included a mortality index for England comparing deaths in hospital, and statistics on the number of doctors per 100 beds. Perhaps ludicrously, it carried the same 'this is not a league table' health warning that the DoH uses. But given the attached lists of 20 'hospitals with lowest death rate' and 20 with 'highest death rates', it was not surprising that some sections of the media mistook it for one.

Walsall Hospitals trust had the highest death rate, University College London Hospitals the lowest. UCLH was delighted; Walsall less so - chief executive John Rostill attacked as 'obscene' the study's overall conclusion that the ratio of doctors to beds was the leading factor behind high death rates.

This week he was backed by healthcare pundit Roy Lilley, who pointed out that hospice services (with their '100 per cent death rate') within Walsall Hospitals trust might have skewed its position in the rankings.

Dr Foster shrugged off suggestions that factors such as deprivation and co-morbidity had not been adequately taken into account, insisting its research had shown neither factor to be a powerful influence.
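To see what is at stake in that argument, consider a minimal sketch - with entirely invented trusts and figures, not Dr Foster's own method or data - of how a crude in-hospital death rate compares with a casemix-adjusted, observed-versus-expected ratio of the broad kind such indices rely on:

```python
# Hypothetical illustration only: trust names and figures are invented.
# Compares a crude in-hospital death rate with a casemix-adjusted
# (observed-versus-expected) ratio for two imaginary trusts.

trusts = {
    # name: (admissions, observed_deaths, expected_deaths_given_casemix)
    "Trust A (includes a hospice unit)": (20_000, 700, 720),
    "Trust B (elective-heavy caseload)": (20_000, 500, 400),
}

for name, (admissions, observed, expected) in trusts.items():
    crude_rate = 100 * observed / admissions   # deaths per 100 admissions
    adjusted_ratio = observed / expected       # above 1.0 = worse than expected
    print(f"{name}: crude rate {crude_rate:.1f}%, "
          f"observed/expected {adjusted_ratio:.2f}")

# Trust A has the higher crude rate (3.5% vs 2.5%) but the better adjusted
# ratio (0.97 vs 1.25), because its expected deaths reflect the terminally
# ill patients its hospice unit admits.
```

Whether the expected-death figures behind any published index capture deprivation and co-morbidity well enough is precisely what the critics and Dr Foster disagree about.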

But beyond issues of data quality, an ideological debate rages on.

The concept of using public data for private profit can be seen in two ways. Dr Foster managing director Tim Kelsey believes the project is a great example of a private company 'taking some of the strain off the NHS'.

Many sources within the NHS feel differently, questioning whether it is 'morally right' that a private company should harness the efforts of health service staff, who collate and update the information, for its own profit. As one put it:

'People in the NHS have been working our arses off to give this private company a fat profit.'

If firmer conclusions need to be drawn about what the data means, they argue, shouldn't the NHS be doing that itself?

Already a number of private companies work as consultants, helping trusts formulate or break down their data for the NHS reference costs. Among these, B-plan is upping its game by providing benchmarking services via the Internet - allowing trusts to work interactively with data and ultimately choose their own peers.
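By way of illustration - with trust names, costs and peer choices invented rather than drawn from B-plan's actual service - an interactive benchmarking exercise of this sort might amount to something like:

```python
# Hypothetical sketch of peer-group benchmarking on reference-cost data.
# All trust names and cost figures below are invented for illustration.

reference_costs = {  # cost per episode of care, in pounds
    "Trust A": 1_250,
    "Trust B": 980,
    "Trust C": 1_100,
    "Trust D": 1_420,
    "Trust E": 1_050,
}

my_trust = "Trust A"
chosen_peers = ["Trust C", "Trust E"]  # the trust picks its own comparators

peer_average = sum(reference_costs[p] for p in chosen_peers) / len(chosen_peers)
difference = reference_costs[my_trust] - peer_average

print(f"{my_trust}: £{reference_costs[my_trust]:,} per episode")
print(f"Chosen peer average: £{peer_average:,.0f}")
print(f"Gap against chosen peers: £{difference:,.0f}")
```

The attraction of letting trusts choose their own peers is, presumably, comparing like with like - a teaching hospital against other teaching hospitals, say - rather than against a crude national average.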

Nigel Edwards, policy director of the NHS Confederation, agrees that the NHS needs to build its own expertise in collating and benchmarking clinical and performance data. This could mean using a third party for the work, but at reduced cost to the NHS through some form of national policy, he says.

Media coverage of the Dr Foster data has now given the debate added urgency.

Mr Edwards says: 'It is a very good point. The DoH has pretty much admitted that its reference costs are dodgy, so isn't it time they did something about it?'