Published: 05/08/2004, Volume 114, No 5917, pages 10-11
The latest star-ratings have left organisations which have lost out questioning the system.
Daloni Carlisle looks at how the ratings are decided - and their impact when they fail to reflect trusts' views of themselves
As record numbers of trusts began to celebrate their three-star triumph this year, rumblings of disquiet from their less successful colleagues were already becoming audible.
Taken as individual cases, this might seem like sour grapes. But the criticisms reflect some more widespread concerns about these latest results. Chief among them is the volatility of the system.
It is true that, overall, more trusts got more stars than ever before. But if the NHS was hoping for a journey of continuous improvement, then, according to the stars, progress this year has been rather more erratic.
There were the 10 foundation wannabes that lost a star - or two in the case of Chelsea and Westminster Healthcare trust - and have had to put their plans on ice. Four of the current foundation trusts also lost a star. Then there were the trusts whose performance earned the same number of points yet saw their rating fall from one star to zero.
Not to mention the happier organisations that found themselves bouncing up the ratings.
Three acute trusts and two primary care trusts leapt from zero to three stars and one acute and 10 PCTs rose from zero to two stars.
While the trusts on the up universally praised their staff and recovery initiatives, most of those on their way down are questioning the system.
And there are some big questions to be asked - about the balanced scorecard thresholds, about the statistical differences between meeting and missing targets, about the integrity of self-assessed indicators and finally about what the star-ratings now mean.
King's Fund chief economist Professor John Appleby raises an important question about this year's results. 'It is well known in composite performance measures that if there is too much movement there is something wrong. In general, too much volatility from year to year indicates the system is overly sensitive.'
Star-ratings are fiendishly complicated, measuring a range of indicators on different scales to come up with a composite score.
On the top line are the key targets - in acute trusts these are the 12-hour wait in accident and emergency, cancer treatment times, financial management, waiting times and so on.
Trusts can achieve on these, under-achieve or significantly under-achieve. Missing a target means penalty points - two for under-achieving and six for significantly under-achieving. Trusts knew in advance what these key targets were and what they had to do to meet them.
The score from the key targets is only half the story. There are also the so-called secondary indicators that make up the balanced scorecard. In acute trusts this is where you find the hospital food, patient surveys and child protection indicators. On these, trusts can score from one to five points. The more you get, the higher your potential rating. Trusts end up with a balanced scorecard rating from zero (poor) to six (good).
It is the combination of the penalty points on key targets and the balanced scorecard that produce the star rating. So, a trust that has two or fewer penalty points and a balanced scorecard of five or six will get three stars.
But a trust with two penalty points - or fewer - and a balanced scorecard of zero would get just one star. And crucially, this could include a trust which hit all its key targets. Conversely, trusts that get three to six penalty points can win one or two stars depending on their balanced scorecard.
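The combinations the article spells out can be sketched as a simple lookup. The cut-off between one and two stars for trusts on three to six penalty points is not published, so the split used below is an assumption, and combinations the article does not describe are returned as unknown:

```python
def star_rating(penalty_points, scorecard):
    """Sketch of the star-rating combinations described in the article.

    penalty_points: total from key targets (2 per under-achievement,
                    6 per significant under-achievement).
    scorecard: balanced scorecard rating, 0 (poor) to 6 (good).
    Returns a star count 0-3, or None where the article leaves the
    mapping unstated.
    """
    if penalty_points <= 2:
        if scorecard >= 5:
            return 3   # two or fewer penalty points, scorecard 5-6
        if scorecard == 0:
            return 1   # even a trust that hit every key target
        return None    # scorecard 1-4: combination not described
    if 3 <= penalty_points <= 6:
        # "one or two stars depending on their balanced scorecard" -
        # the exact cut-off is unpublished, so this split is a guess
        return 2 if scorecard >= 3 else 1
    if 7 <= penalty_points <= 12 and scorecard == 0:
        return 0       # this year's rule; last year this earned one star
    return None
```

On this sketch, Chelsea and Westminster's six penalty points from finance plus a poor scorecard land on one star, and Barnet and Chase Farm's 10 points with a zero scorecard land on zero - consistent with the outcomes reported below.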
Chelsea and Westminster achieved eight of the nine key targets but significantly under-achieved on financial management (six penalty points). This, combined with a poor balanced scorecard, produced the one star.
It is the balanced scorecard area that makes the situation murky.
The thresholds for achieving points in each area are not published in advance. 'We knew we were going to drop a star because of the finance and I had prepared the staff for that,' says Chelsea and Westminster chief executive Heather Lawrence. 'But losing two stars because of the balanced scorecard was a shock.'
This year there was a subtle change that meant that a trust performing poorly on its key targets (seven to 12 penalty points) with a balanced scorecard of zero would be awarded zero stars. Last year it would have won one star.
This caught out London's Barnet and Chase Farm Hospitals trust and Mid Yorkshire Hospitals trust.
Barnet and Chase Farm rose from zero stars in 2002 to one star last year. This year it is back down to zero, despite scoring the same 10 points as last year. 'We think it's something to do with the balanced scorecard,' says communications director Nick Samuels.
Mr Samuels also says the margins between achievement, under-achievement and significant under-achievement are 'extraordinarily small' and in some cases 'almost arbitrary'. The trust had seven patients waiting more than 12 months during the stars measurement period.
He believes that if that number had been six, the trust would have kept its star. Healthcare Commission head of external relations Graham Capper points out that the established national target is for no patient to wait more than a year. He says that with any threshold (and this one was set as a percentage of total patients) there can always be trusts that only just miss it. And he points out that a zero-star trust will always have more than one area of underperformance.
Mr Samuels also questions the success of government policies developed to improve failing trusts. The running of Barnet and Chase Farm was put out to franchise two years ago when it first got zero stars. The chief executive given that franchise left earlier this year. Mr Samuels says: 'Given that we were franchised, given that we have had a huge amount of support and visits from every expert in every area, what does it say that we are still zero stars?'
Ms Lawrence is equally scathing.
'We have made significant achievements and in fact overperformed against eight of the nine key targets,' she says. 'The rating does not reflect services at this hospital.'
She queries both the data collection and its interpretation for the balanced scorecard. 'We feel there is an element of subjectivity in many of these secondary indicators.' Along with other aggrieved trusts she cites child protection indicators, which are dependent on local authorities providing access to information.
'In another example, the trust had a poor score on its self-reported untoward incident rate. We recorded more near misses than other places because we encourage staff to report incidents. We feel it's good practice.'
Ian Cummings, chief executive at Morecambe Bay Hospitals trust, one of four foundation hopefuls to have lost a star on the balanced scorecard, is more sanguine in his language but nevertheless makes the same point about the four secondary indicators that lost him his star.
The trust was marked down because it offers few promotion opportunities - yet it has the highest retention rate in the country. It was similarly marked down on ethnic minority services, even though 95 per cent of its patients are white.
'We scored below average on our self-assessment of infection control,' he adds. 'We are in the top 25 trusts for hospital cleanliness and the top 10 for MRSA. We feel we assessed ourselves honestly, but perhaps with the benefit of hindsight...'

These same themes are repeated over and over. The thresholds for the balanced scorecard are far from transparent; self-assessed secondary indicators are open to interpretation; the financial indicator is too blunt and penalises trusts which have, for example, implemented the consultant contract with some vigour and found themselves in deficit as a result. 'Where's [Department of Health director of workforce] Andrew Foster now?'
asked one senior manager hoisted on this particular petard.
'The question is: who decides the balanced scorecard thresholds?' asks one seasoned star-ratings observer. 'You can move the thresholds about so that you adjust the number of three-star and zero-star trusts - and it's these categories that are interesting.'
The Healthcare Commission does not accept that there has been too much volatility this year.
'It would not be credible if all trusts went up or all went down,' says Mr Capper. The thresholds are not made known in advance to prevent gaming - trusts hitting the target and then moving resources elsewhere.
And Mr Capper stresses: 'We are absolutely independent. Yes, the star-ratings are based on government policy, but in terms of how the indicators are compiled and how the data is collected and how the model works, that is the responsibility of the commission alone.'
Ministers saw the ratings one day before the service and made no attempt to change anything. 'The fact that high-profile trusts have lost stars is part of the evidence of that guarantee of independence.'

'Getting a zero means sod all'

Back in the early days of star-ratings, the difference between three stars and zero really mattered. Three-star trusts were given freedom over how to spend extra cash from the performance fund, while zero-star trusts risked management franchising and oversight of their slice of the performance fund by the Modernisation Agency.
Today, there is no franchising and the Modernisation Agency's future hangs in the balance. There is no sign of the performance fund yet either, although the Department of Health says £65m has been put aside for the task.
One zero-star trust told HSJ: 'Getting a zero means absolutely sod all... it's obvious the DoH doesn't take it that seriously any more.'