The new mortality indicator suffers from mixed messages

The debate over how hospital mortality should be measured and whether those measures reveal anything useful has rumbled on for the last decade.

The Department of Health has sensibly attempted to bring it to a conclusion by gathering a consensus around a new measure - the summary hospital-level mortality indicator.

Unfortunately, in an attempt to calm all the concerns about how the indicator will be consumed by the public and the press - let us call it the Dr Foster Hospital Guide factor - the policy guidance runs the risk of undermining it from the off.

The guidance appears clear: “A poor SHMI would be a prima facie case for an investigation by the hospital… Taking no investigative action… would be a signal of poor board governance”. There is similar advice for commissioners.

However, much of the rest of the document concerns the limitations of the new indicator as a measure of quality. It also contains a lengthy list of caveats that must be applied when presenting and interpreting the data.

The message seems to be: this is important in principle, but you’ll have plenty of wriggle room if things get sticky.

The SHMI will be “owned” by the national quality indicator development group created by the national quality board. It is clear the steering group which developed the indicator (also set up by the board) does not always agree on how it should be used, for example in guiding patient choice. Such varying views lie behind the hedged advice.

The new indicator is a definite step forward. The national quality board should ensure it is championed more robustly.
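For readers unfamiliar with how indicators of this family work, here is a minimal sketch of the observed-versus-expected calculation that underlies a standardised mortality ratio such as the HSMR or SHMI. The case-mix groups and figures are invented for illustration; real indicators use far more elaborate risk-adjustment models.

```python
# Illustrative only: indicators of the SHMI/HSMR family compare the deaths a
# hospital actually recorded with the number "expected" after risk adjustment
# for its case mix. A score of 100 means deaths were roughly as expected.

def standardised_mortality_ratio(groups):
    """groups: list of (observed_deaths, expected_deaths) per case-mix group.

    Returns the ratio scaled so that 100 means 'as expected'; values well
    above 100 are the 'poor SHMI' the guidance says boards must investigate.
    """
    observed = sum(o for o, _ in groups)
    expected = sum(e for _, e in groups)
    return 100 * observed / expected

# Hypothetical hospital with three case-mix groups (figures invented).
example = [(30, 25.0), (12, 14.5), (8, 9.0)]
print(round(standardised_mortality_ratio(example), 1))  # 103.1
```

As the commenters below note, the contentious part is not this arithmetic but the expected-deaths denominator: it depends entirely on the risk-adjustment model, which is where the signal-to-noise and gaming concerns arise.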

Readers' comments (7)

  • I guess that's why they decided to call it the SHMI: to give everyone enough wriggle room to SHMI out of trouble.

    Or maybe there's a lovely new glam, Strictly Come Dancing theme pervading the DH? ... SHMIs, CQUINs, PROMs? I hear FtHR BoA and LUREx are in development.


  • Ian Bowns

    How about a bit of heresy - process measures can be more useful in quality improvement than outcome measures. Although it is difficult to argue against the principle of measuring outcomes, a number of commentators, such as Lilford, Brown and Nicholl (Lilford R, Brown C, Nicholl J. Use of process measures to monitor the quality of clinical practice. Br Med J. 2007;335:648-650), have given cogent arguments regarding their limitations as quality improvement tools. They particularly cite the low signal-to-noise ratio: outcomes are affected by many factors other than the quality of care. Risk adjustment has its limitations, and is sensitive to modelling assumptions. Finally, and possibly most importantly, poor outcome measures do not help a team or institution know where to make improvements to its care delivery. There's a wealth of other material out there showing how this can be done (see http://www.ihi.org for examples).


  • I think anonymous might be wrong about the initiatives due to come out of the DoH. My understanding is that the next one to be announced will be Standardised Hospital Indicator of Treatment Effectiveness.


    Ian obviously has only one thing to say, having posted his comment three times.

    The problem with the DH's indicator set is not just that we now have two mortality indicators, but its obsession with the 'one number tells all' scenario.

    Firstly, the one number was high at Mid Staffs, but that didn't stop the behaviour for four years. That behaviour saw the SHA, the PCT and the trust debating numbers rather than actions.

    Secondly, one number can hide a multitude of disasters. At a hospital level it's meaningless, because dozens of bad things could be occurring that are masked by the volume of patients. At a speciality level it's better, but even then the data can be manipulated, especially in medicine, less so in surgery.

    The solution is not a number. The solution is properly measuring outcomes beyond live or die. What about the quality of the outcome when the patient lives?

    The DH isn't fit to lead this programme. Now that all targets are supposedly gone (not really), the resources should be put into measuring quality of outcomes, something the private sector has been doing better than the NHS for 15 years. When the CQC has new leadership, it should inspect.


  • What is it about the medical profession that gives it such a thin skin?

    HSMRs are, and have been, an excellent "indicator" of good and poor performance for about five years. And DFI's annual league tables are an excellent way of presenting them to consumers. The new indicator (taking on board deaths in the 30 days after leaving hospital) will be better still if that extra data can actually be made available and fed into the formula.

    The med establishment continues to say that such numbers do not tell the whole story and should not be stacked in league tables - as if we poor dim consumers don't know what the word "indicator" means.

    Behind it is an arrogance along the lines of "we doctors are above having our actions monitored by mere indicators... if you want to catch us at it, you have to have a smoking gun or nothing at all."

    It's not on.


  • Clive Peedell

    Anonymous @ 5/11/10, 7.11pm

    I think you should read the recent BMJ editorial on HSMRs. Your optimism about them would soon be dulled by the evidence.
    Two recent papers in the BMJ concluded that HSMRs were unfit for purpose because the signal-to-noise ratio is so unfavourable, and the variance simply too great to be attributable to quality of care. HSMRs are, concluded Richard Lilford and Peter Pronovost, "a bad idea that just won't go away"
    (BMJ 2010;340:c2016 and BMJ 2010;340:c2066)

    I am a clinician who would be delighted to have my outcomes measured. However, this needs to be done to near clinical trial standards to be meaningful.
    Since outcomes are going to be related to payment, this becomes even more crucial. There will be "gaming" of the data, just as there was for waiting lists.

    Measuring clinical outcomes is a very difficult process. It may work in some specialties, but making it work across the board will be fraught with difficulty and associated costs.
    The risks for hospital finances and departmental reputations are very high. Confounding factors are rife and include demographics, co-morbidities, multiple diseases, chronicity of disease, and the lack of appropriate and validated outcome measures, as well as statistical problems like regression to the mean.

    I'm very glad that clinicians are being cautious about this. This is not about protecting my profession. Nowhere else in the world is attempting this madness - and with good reason.

    Let's concentrate on a few important measures and identify and investigate the "outliers" to see if there is indeed poor practice going on. Publishing data nationally before it has been validated would be a dangerous thing to do.


  • If wards are run by Ward Sisters who have had ongoing investment in leadership training, are in control of their budgets and of all services provided to their patients, and are listened to when they report risks, then HSMRs improve.

    Ask a good Ward Sister how many staff and what bands they need each shift and outcomes will improve. Clinical and political leadership at ward level are the keys to improvements in measurable outcomes.

    Between the Ward Sister and the Director of Nursing there should be no more than one senior nurse.

