Despite some major data flaws, the launch of clinical indicator tables was welcomed by many - but will they be acted upon? Lyn Whitfield reports

There was a palpable sense of trepidation in the higher reaches of the NHS as the government prepared to publish clinical indicator tables last week.

The Department of Health organised a technical briefing two days before Wednesday's launch, with six senior officials to run through the tables with journalists.

The subtext was that the tables were so complex and open to caveats about data quality that it was impossible to boil them down to 'Killer hospitals shamed in league tables' stories.

Two days later, health secretary Frank Dobson insisted that the indicators 'are not league tables'.

He described the indicators as 'a first step' and warned that doctors might turn away tough cases if they had to 'look over their shoulders to see where they are coming in the figures'.

And it worked. Newspaper coverage was much more temperate than that which greeted the appearance of the old - eventually derided - hospital league tables.

Agreeing measures to replace those tables, with their five-star ratings and 'perverse incentives', has taken years.

In January 1998, the then health minister, Alan Milburn, launched consultation on a new NHS performance assessment framework. It did not appear in final form until earlier this year.

The original proposals included 15 clinical indicators and 41 high-level indicators - proxy measures at health authority level for the six areas of the framework.

Only six clinical indicators have survived, giving mortality, readmission and discharge data by trust for the first time.

Two indicators on complications after surgery may still be published. The rest have been judged 'only meaningful at HA level' or dropped for 'technical' reasons.

Each trust's data was given a 'data quality mark' of acceptable, mediocre or poor. Forty trusts (17 per cent) had 'poor' data and were excluded from the tables.

Officials said the main causes were 'hospitals not doing a good job' and weaknesses in central information gathering.

This is hardly news to Frank O'Sullivan, chief executive of Northern Devon Healthcare trust, picked out as one of the poorest performers on deaths in hospital after surgery. Unfortunately, he says, 'when we got these figures, which was very, very late, we realised there was a glaring error'.

Instead of dividing the 13 deaths following planned surgery by 5,168, the number of elective admissions in 1997-98, the deaths had been divided by 160 - a 32-fold error. Mr O'Sullivan says he lobbied hard, with support from his HA and region, to have the trust removed from the tables. Instead, a caveat about the data was included - on page 76.
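A rough check of those figures, taking the trust's own denominators as reported, shows the scale of the distortion:

\[
\frac{13}{5{,}168} \approx 0.25\% \qquad \text{versus} \qquad \frac{13}{160} \approx 8.1\%,
\]

an apparent post-surgery death rate roughly $5{,}168 / 160 \approx 32$ times higher than the correct one.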

The trust arranged detailed briefings about the problem. Local radio and television covered the issue. 'If it had to happen, I think we minimised the damage. The question it raises is one of process,' says Mr O'Sullivan.

'Trusts should be given an opportunity to comment,' he says.

But John Appleby, King's Fund director of health systems, says: 'We need to get on and use the data, instead of saying it is all too difficult to be useful, which is what Frank Dobson seems to have come close to saying.

'It is good for HAs and trusts to be put under pressure and made to explain why they should be at the top or bottom of the lists.

'If the answer is that the data was not very good, that is not very impressive and they should be made to improve it.'

Despite the problems, the indicators have been given a surprisingly warm welcome.

Dr Kieran Walshe, senior research fellow at Birmingham University's health services management centre, feels that both sets of indicators are 'quite clearly tied into' the performance assessment framework and have been given a clear rationale. 'They have obviously listened to the consultation,' he adds.

Nigel Edwards, policy director at the NHS Confederation, feels that the indicators are 'excellent'.

'This is not just about performance. It is about giving HAs more ammunition to improve local healthcare,' he says.

Which is, of course, another question. How will the tables be used? DoH officials admit that the indicators would not have picked up the Bristol heart surgery scandal, because it involved too few children in one small area of surgery. Instead, they see the indicators as a tool for clinical directors and managers to monitor performance variations and as a catalyst for action.

But Mr Appleby wonders whether regional offices will also ask questions - and what they will do if NHS organisations refuse to act on problems. 'There is no point having information for information's sake,' he says.