As the Conservatives promise voters a shift from targets to a service shaped by outcome measures and what patients say about their healthcare experience, Alison Moore looks at why many managers are sceptical about the idea

Targets for NHS organisations have been a feature of the NHS - and the bane of many managers’ lives - for the last decade, but change may be coming. Not only is there growing interest in outcome measures, but the Conservatives are committed to abolishing targets and replacing them with a range of other measures.

Pretty much all of the data we collect at the moment is failure data

But are targets all bad? King’s Fund chief economist John Appleby confesses to being a target fan and argues they have often done a good job in driving improvements.

“We don’t seem to have any evidence of a deterioration in quality or a change in clinical priorities when waiting times have come down,” he says.

But waiting times were a real public concern and this has now been addressed; shorter waits may be desirable even if they bring no clinical benefit. He has more doubts about the four hour accident and emergency target, where raising performance from 97 per cent to 98 per cent may carry a disproportionate marginal cost for little benefit.

Some targets will be closely linked to outcome and provide useful proxies; Peter Griffiths, director of the national nursing research unit at King’s College London, points to door-to-needle time for thrombolysis as a good proxy for outcomes. Targets can also be useful when the benefits of treatment will manifest themselves much later, making outcome measurement retrospective.

But attempts to measure quality through processes fall down when there is little evidence that reaching a target or performing a process will affect the outcome for the patient. For example, Professor Griffiths says the Care Quality Commission looks at whether a tool is used to assess inpatients for pressure ulcers. He believes the available tools are not robust enough to justify this; he would prefer a system which looks at the incidence of ulcers.

But if few want a bonfire of all the targets, there is almost universal recognition that outcome measures could be a useful addendum, providing a different perspective on quality.

“We need a balance of different measures. We need lots of publication and a relatively small number of targets - the questions have always been about whether the targets are good and how many there are,” says NHS Confederation policy director Nigel Edwards.

“There have to be cleverer ways of capturing the whole patient journey,” he says.

A vital part of this is likely to be patient reported outcome measures - PROMs. Data is currently being collected on these for a number of surgical procedures. Bupa medical director Andrew Vallance-Owen argues they are useful because they look at what patients actually get out of treatment - the concept of health gain. 

“Pretty much all of the data we collect at the moment is failure data - things like mortality and infections,” he says. “PROMs are as objective as they can be about whether the patient got a health gain: can they walk further? Do they have less pain?”

Collecting this sort of data across thousands of people allows judgements to be made about which treatments work best for different population groups - and which are of limited value for particular groups because patients report no improvement in how they feel or function afterwards, suggesting those treatments should be stopped.

Bupa started using PROMs around nine years ago in its hospitals. Despite the relatively small number of procedures carried out compared with the NHS, Dr Vallance-Owen believes the measures were useful. Results were fed back to the hospitals and led to some changes, such as one hospital changing the anaesthetic used for hysterectomies.

Methodological problems

But 70 per cent of healthcare expenditure is on long term conditions, where there are methodological problems with PROMs.

Picker Institute head of policy Don Redding says: “By and large [the patients] are not going to get better. Measures of single treatment are only going to be occasionally useful in their career with that condition. That presents a challenge to anyone who is saying quality in the system needs to be measured by outcomes.”

There are generic PROMs which could be used repeatedly for the same person to show how a patient is progressing. But some non-patient measured outcomes will be needed as well - for example, improving blood sugar control in diabetics may not make them feel that much better now but is very important for long term health. 

Sceptics argue that outcome measures are not a good indicator of quality. Richard Lilford, professor of clinical epidemiology at Birmingham, has argued that they are a blunt instrument - differences in outcome are not well correlated with differences in the quality of care. He has also criticised the methodology behind some uses of standardised mortality rates. This raises questions about how useful PROMs are for judging institutions.

Professor Griffiths says outcomes are affected by many factors, and the quality of healthcare often has only a marginal effect. But he still believes it is important to focus on outcomes and refine those measures to link them as closely as possible to quality.

“I would rather grapple with the problems of outcomes than measure the wrong process,” he says.

One advantage is that outcome measures are less prone to gaming than targets. Corridors may be turned into wards and trolleys into beds so that targets appear to be met; but if patients feel their experience suffers, it will show up in surveys.

If targets such as four hours in A&E and 18 weeks referral to treatment were abandoned, it is likely such surveys would play an important part in keeping access times in the foreground: patients are unlikely to report a good experience if they are kept waiting for a long time.

But there may be a danger that 18 weeks will slip unless it is upheld as a standard or a right; hospitals under pressure may let waiting times extend and then lack incentive to get back on track. The experience of the relatively few patients who wait a long time in A&E or for inpatient treatment may barely affect overall satisfaction or outcome measures, while the target system concentrates on the breaches.

John Appleby points out: “We left waiting times up to individual consultants and look where we got. Consultants didn’t value it as much as patients did.”

But what of the broader, international comparisons such as five year survival rates for cancer? Dr Vallance-Owen suggests they raise important issues but points out there is often a debate about whether measurements are strictly comparable and where the system needs improvement.

Improvements in care may take seven years or more to affect five year survival rates, making the measure retrospective.

“By the time you have discovered your cancer services are wrong, someone else is in government,” points out Nigel Edwards.

The time lag involved in such indicators could mean that ongoing process targets have a use as a sign that change is happening - but they are only useful if they are related to the ultimate outcome measure. 

What the Tories are proposing

In a 2008 policy green paper the Conservatives attacked targets, saying they led to bureaucracy and distorted priorities to little clinical benefit. The paper proposed phasing them out in favour of outcome measures:

  • five year survival rates for cancer to be in excess of EU averages by 2015;
  • preventable mortality from stroke and heart disease to be below EU averages by 2015;
  • a similar target for lung disease by 2020;
  • year-on-year improvements in PROMs for long term conditions;
  • year-on-year improvements in patient satisfaction with healthcare access and experience;
  • mortality amenable to healthcare being reduced to the level of comparable countries;
  • year-on-year reductions in adverse events.  

Many of these measures may be a way of judging the effectiveness of a healthcare system but may tell little about an individual organisation’s performance. There are massive differences in rates of reporting of adverse events, for example, which may reflect organisational culture rather than the actual rate.