It is already possible to predict where health authorities will be ranked in the government's new traffic-light system. But is it fair, ask Chris Deeming and John Appleby

Among the various shopping lists set out in the NHS plan is what appears at first sight to be a bit of cross-departmental working between health and transport: the NHS will soon be using 'traffic lights' to promote improvements in health service performance.

The intention is to compile a composite performance measure based on indicators from the performance assessment framework.1 This will show, at a glance, the relative performance of health authorities, and will be extended to trusts, primary care groups/trusts and health action zones.

HAs may feel they have been here before; the efficiency index was also a composite measure of performance, but was widely criticised in the service. The traffic-light system seems to be an attempt to address some of its shortcomings. By combining not just measures of efficiency, but indicators covering access, the patient's experience and health outcomes, a 'balanced scorecard' of performance is promised.

But composite indicators of performance are notorious for raising the hackles of those subject to scrutiny. Will the Department of Health manage to avoid accusations that this new performance system is unfair, inaccurate and ultimately an inadequate way of measuring NHS performance in the round? In anticipation of the actual rankings to be produced by the DoH, we have compiled a ranking of all English HAs using a methodology suggested by the department.

All businesses collect information about their performance.2 In this respect the NHS is no different - nor should it be. But what (and how much) information to collect can be problematic. The NHS has 50 indicators in the high-level performance indicator and clinical indicator sets and these are likely to increase.

In terms of what kind of information to collect, both the NHS and private business need to consider their reasons for existence: in other words, what are the objectives of the organisation?

Although there is a common perception that private firms find this easy to answer (profits), in fact, like the NHS, they face a potentially large array of goals and objectives.

The basis for the traffic-light system is the national performance assessment framework. And the foundations of the framework are the objectives of the NHS, each one supported by a number of performance indicators (the high-level performance indicators/clinical indicators etc).

The NHS plan sets out the broad framework of how the traffic-light system will operate. Annual 'league tables' are to be published which will classify HAs (and from next April, other NHS organisations, including trusts) as 'green', 'yellow' or 'red' depending on their overall performance. This will be measured by an indicator composed of some, all or more of the indicators from the high-level performance indicators set.

But there is more to it than this. The plan states that organisations meeting all core national targets (for example, particular maximum waiting times as set centrally) and which rank in the top quartile on the composite performance measure will have a green light.

Yellow organisations will be meeting all or most core national targets, but will not be in the top 25 per cent. Red organisations will be those which are failing to meet a number of the core national targets. So red status will result from poor absolute standards of performance, triggering action to ensure a basic level of acceptable performance is achieved throughout the NHS (see table 1).
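The classification rules just described can be sketched in a few lines of code. This is purely illustrative: the function name, its inputs and the simple top-quartile test are our own assumptions, not the Department of Health's actual implementation.

```python
def classify(meets_all_targets: bool, meets_most_targets: bool,
             composite_rank: int, n_authorities: int) -> str:
    """Traffic-light status from core-target compliance and relative rank.

    Illustrative only: assumes rank 1 is the best performer and that the
    green boundary is the top quartile of the composite ranking.
    """
    in_top_quartile = composite_rank <= n_authorities / 4
    if meets_all_targets and in_top_quartile:
        return "green"
    if meets_most_targets:
        return "yellow"
    return "red"

# A hypothetical HA meeting every core target and ranked 10th of 100:
print(classify(True, True, 10, 100))   # green
```

Note that an authority meeting every core target but sitting outside the top quartile still comes out yellow, which is exactly the mixed absolute/relative logic the plan describes.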

In effect, the traffic-light system is attempting to measure performance both in absolute and relative terms. For example, green status reflects both outstanding absolute performance against core national targets and relative performance against other NHS organisations based on the composite performance measure.

The choice of a 25 per cent boundary in the traffic lights is fairly arbitrary. Interestingly, the performance assessment framework indicators for personal social services, published in October, are presented with boundaries which vary from one indicator to another.

These cut-offs were determined on the basis of expert opinion and evidence (where it existed) that such boundaries were significant in service terms.3

The issue of where in the range of the composite NHS indicator the lights should change from yellow to red or yellow to green also raises the question of the (statistical) significance of differences close to the boundaries.

The values of many of the indicators making up the composite traffic-light measure are point estimates within a range of uncertainty. Hence, values for the composite indicator will also have a range of uncertainty. So, in the absence of uncertainty intervals for every authority's traffic-light score, it is impossible to identify statistically significant differences between the authorities; an authority which is just in the red could equally be considered to be just in the yellow.
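To make the boundary problem concrete, here is a toy numerical example. Every figure is invented for illustration; no real HA data or real cut-off is used.

```python
# All numbers invented: a composite score whose uncertainty interval
# straddles a hypothetical yellow/red boundary.
boundary = -5.0                      # hypothetical yellow/red cut-off
score, half_width = -5.4, 1.2        # point estimate and interval half-width
low, high = score - half_width, score + half_width

# The interval spans the boundary: "just in the red" is statistically
# indistinguishable from "just in the yellow" at this level of uncertainty.
straddles = low < boundary < high
print(straddles)  # True
```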

The last time the NHS seriously grappled with performance indicators (in the 1980s), it failed.

This was partly due to the lack of thought about how to structure an effective incentive system. This time the NHS plan has gone further, suggesting a number of financial and non-financial incentives.

For example, green organisations are to be rewarded with greater autonomy and will have access to the national performance fund, plus discretionary capital funds. The performance fund will start from next April, and will amount to around £500m a year by 2003-04. While green organisations will have automatic access to their share of the fund, yellow organisations will be required to agree plans, signed off by their local regional office, setting out how they will use their share to improve their services.

Red organisations will not necessarily be denied access to the fund, but the Modernisation Agency will oversee the spending of their share and, as the plan states, the extra money will come with strings attached. Along with the financial incentives of the fund, there are a number of non-financial carrots - essentially greater managerial freedom and a reduced level of scrutiny from the Commission for Health Improvement.

HAs identified as failing are likely to have performed poorly as a result of a unique combination of circumstances and will need bespoke support. The plan states that red organisations will be subject to a rising scale of intervention, reflecting the seriousness and persistence of their problems. And in theory they will receive expert external advice, support and, where necessary, direct managerial intervention - possibly from a green organisation.

Although the NHS plan goes into some detail on the various incentive mechanisms it proposes, it is somewhat hazy on the actual construction of a key element of the monitoring process - the traffic-light league table. However, all the data needed to compile such a ranking is available and, following a conference on performance measurement held at the King's Fund this year, so is a possible methodology for deriving a composite performance measure.

In compiling rankings for English HAs we have had to assume that authorities doing well on their performance assessment framework indicators will also be meeting core national targets. We have also assumed that all indicators are given an equal weighting in the composite measure.2

We have used the latest performance indicator data (published in June) to construct the rank ordering - excluding one of the 49 indicators (number of those on waiting list waiting 18 months or more) as only three authorities registered a number greater than zero on this.

Indicators for every HA were expressed in a common measure - percentage above or below the English average - and then summed to give a total percentage deviation from the national average.
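That calculation can be sketched directly. A minimal illustration with invented sample figures; for simplicity it ignores the sign flip that a 'lower is better' indicator would need.

```python
def composite_score(values, national_averages):
    """Sum of each indicator's percentage deviation from the English average."""
    return sum(100.0 * (v - avg) / avg
               for v, avg in zip(values, national_averages))

# Two hypothetical indicators for one HA against the English averages:
ha_values = [52.0, 88.0]   # invented HA indicator values
eng_avgs = [50.0, 80.0]    # invented English averages
print(composite_score(ha_values, eng_avgs))  # 4% + 10% = 14.0
```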

Figure 1 shows the distribution of HAs on the basis of the computed composite indicator.

A glance at the names of authorities at the top and the bottom of the figure strongly indicates a North/South divide in NHS performance. There is also a strong positive association with a ranking compiled by the King's Fund this year based on just six indicators from the high-level performance indicators and weighted to take account of the public's performance priorities.

The traffic-light league table ranks Dorset top and Manchester bottom. But why? Moreover, as we noted earlier, there seems to be a North-South division between green authorities and red authorities: 71 per cent of green authorities are in the South, and 83 per cent of red authorities are in the North (counting Midlands authorities as northern).

One explanation is that the traffic-light ranking is not just measuring the performance of the NHS, but also capturing other factors which influence not just the health status of populations but perhaps also their use of health services. These other factors - which may be considered beyond the influence of the NHS - would include socioeconomic characteristics of populations.

And indeed, we found a statistically significant association between one composite measure of socio-economic deprivation (the under-privileged area score) and the traffic-light rankings. Around half the variation in the traffic-light rankings appears to be explained by the under-privileged area score - authorities with higher levels of deprivation (such as Manchester) tend to be in the lower half of the traffic-light league table.

Although the NHS itself has only a weak influence over such things, government as a whole has more power to change the social and economic circumstances of the population. So, from a joined-up government perspective, the DoH and the NHS should not be considered the only parts of government involved in health.

Also, the socio-economic circumstances of local populations are already taken into account through the weighted capitation formula. So, adjusting for socio-economic factors could be considered double counting. However, if the traffic-light system is meant to measure NHS performance only, then there is an argument for adjusting the ranking to reflect the relative difficulties HAs face.

The issue of what weights to use in compiling the traffic-lights indicator is important. Giving all indicators an equal weight implies that all are, in effect, priorities. It also takes no account of potential trade-offs between indicators.

It may be possible to do well on one measure, but only at the expense of doing poorly on another, for example. Analysis of all 48 indicators shows that virtually all are correlated, and many have significant negative associations - suggesting that trade-offs exist.

While work by the King's Fund has tried to address the weighting problem by eliciting the public's values on the matter, there is no indication in the NHS plan that differential weights will be used in the traffic-light system to deal with the trade-off and values issues.

Disaggregating the traffic-light indicator to the six domains of the performance assessment framework (health improvement, fair access and so on) shows that no HA manages to rank as a green organisation in all areas (see table 2). Only four authorities manage four green lights, 15 manage three, 26 manage two, and 37 manage one. Seventeen health authorities do not manage any green lights.

Relatively efficient authorities tend to rank poorly on the domains of effective delivery of healthcare, health improvement and outcomes of healthcare.

Statistically significant negative associations exist between the efficiency domain and these other areas.

Over time one would expect to see some movement in the rankings of HAs as individual performances changed. But too much movement would suggest that the composite indicator was a rather unstable measure of performance. However, it is difficult to be precise about what degree of movement should be considered too much.

With three years' performance indicator data now published, it is possible to see how traffic-light rankings have changed over time. We compared just two years, 1997-98 and 1998-99, and found a statistically significant correlation between the two rankings.

Despite the overall association between the two years, there was an average movement of 11 places, with some large changes for some authorities. Six authorities crossed the boundary from green to yellow between the two years, and five from yellow to green. There were similar numbers of authorities moving in and out of the red zone at the other end of the rankings.

Most HAs will not be too surprised with their ranking on the traffic-light league table we have produced. Of course, the actual rankings compiled by the DoH will not be the same - but neither are they likely to be significantly different.

Apart from arguments about the technical construction of the rankings - which weights best reflect society's preferences for NHS performance, whether to adjust for non-healthcare factors affecting NHS performance, and so on - there is the little matter of how NHS organisations will behave in the face of the information the traffic-light rankings reveal. Only time will reveal any perverse incentives and the extent of any 'gaming' within the traffic-light system.


1 The NHS Plan, 2000. Cmd 4818-I.

2 Dunn S. A Whole-System Approach to Performance Management. Paper presented to the State of the NHS 2000: performance management conference at the King's Fund, 5 May, 2000.

3 Social Services Performance in 1999-2000. The Personal Social Services Performance Assessment Framework Indicators. Government Statistical Service, 2000.

4 Smee C. The Performance Assessment Framework: where did it come from and where is it going? In Appleby J and Harrison A, editors. Health Care UK Spring 2000. London: King's Fund.

5 Appleby J, Mulligan J. How Well is The NHS Performing? King's Fund, 2000.