A system of priority scoring would be a more equitable way to determine how urgently patients need treatment. Peter Adams reports on a pilot project

The national priority scoring pilot project began in west London's Ealing, Hammersmith and Hounslow health authority in May 1997 to test a new way of prioritising patients for waiting lists. It is due to run until March next year.

The current method of prioritising patients is decades old and relies on terms such as 'urgent', 'soon' and 'routine' to denote relative urgency.

In recent years, trials of new prioritisation methods, involving priority scoring, have begun in the UK, based on experiences in countries such as Canada and New Zealand.

Priority scoring uses a points system to give each patient a specific score, which in turn determines how long they wait for surgery, and possibly whether they are to be placed on a waiting list at all.

The score is derived by assessing the patient against a number of relevant and explicit clinical and other criteria, such as progress of the disease, pain, disability and interference with lifestyle.

It has become apparent that there is considerable misunderstanding and variation between clinicians and between hospitals over the time periods signified by the terms urgent, soon and routine, particularly the last two categories. This may be one of the causes of the low correlation between urgency and waiting times exhibited by the current system. Scoring offers a way of avoiding ambiguities.

Priority scoring makes prioritisation transparent and structured. It could also enable better use of resources, as suggested by a study of coronary artery bypass graft patients.1

Priority scoring also explicitly weights each component or factor to produce an overall score.
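As a concrete illustration of that weighting, the Python sketch below computes an overall score as a weighted sum of clinician-assessed ratings. The criteria, weights and 0-4 rating scale are hypothetical examples chosen for illustration only, not the pilot's actual forms.

```python
# Illustrative weighted scoring sketch. The criteria, weights and 0-4
# rating scale below are hypothetical examples, not the pilot's forms.

WEIGHTS = {
    "disease_progress": 3.0,
    "pain": 2.5,
    "disability": 2.0,
    "lifestyle_interference": 1.5,
}

def priority_score(ratings: dict) -> float:
    """Sum each clinician-assessed rating multiplied by its fixed weight."""
    return sum(WEIGHTS[criterion] * rating
               for criterion, rating in ratings.items())

# A patient with moderate disease progress, severe pain, mild disability
# and marked interference with lifestyle:
patient = {
    "disease_progress": 2,
    "pain": 4,
    "disability": 1,
    "lifestyle_interference": 3,
}
print(priority_score(patient))  # 3*2 + 2.5*4 + 2*1 + 1.5*3 = 22.5
```

The point is that the weights live in the form itself rather than in any individual clinician's head, so the same assessment always yields the same score.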

Guidelines that do not work like this are difficult to audit as different clinicians will weight components differently.

There are many other potential benefits, including involving and informing patients more in the decision about surgery. It is not surprising, then, that many doctors are enthused by the prospect of priority scoring - indeed, the British Medical Association broadly supports the concept.2

But there are doctors who do not agree, or who feel that it may be too difficult to implement or make workable. The government's policy on cutting waiting lists, for example, is just one of the factors that hinder effective priority scoring.

Piloting the scoring system

The pilot project has implemented priority scoring in an incremental, phased programme at Charing Cross, Hammersmith, Ealing, West Middlesex University and Moorfields Eye hospitals.

It was granted national pilot status in November 1997 and shadow scoring started in January 1998. The scoring went live in autumn 1998. The first specialties included were orthopaedics, urology, and ophthalmology (cataracts).

Phase two of the project began in early 1999, with all the trusts implementing live scoring in one or more of these specialties. The scope is being extended to include GPs scoring patients before making referrals to outpatient clinics.

The project has built on experience in Salisbury and in New Zealand, where different types of scoring form - procedure-specific in New Zealand, more generic in Salisbury - were used.

Our project uses both types, though the design has been refined somewhat. The orthopaedic form is Salisbury-based and fairly generic; the cataract form is, not surprisingly, procedure-specific, while the urology form is a mixture of both approaches.

The acid-test questions for priority scoring are:

Is priority scoring a more equitable, consistent, and rigorous method of prioritising patients than current methods?

Is priority scoring more likely than current prioritisation methods to match waiting times to priority?

Indications so far are that the answer to both questions is yes. Many other factors influence these observations, so it is important not to read too much into the results; even so, the early indications for priority scoring are very positive.

Implementing priority scoring will have a significant impact on hospital procedures, information flows and information technology systems, not to mention the clinicians who will be using it. Several factors are vital to success:

the involvement and support of clinicians in designing and piloting scoring systems;

the commitment of key clinicians and managers to maintain momentum;

establishing appropriate care processes and procedures;

defining information requirements and data items that make priority scoring useful to clinicians and managers;

modifying and integrating the IT systems necessary to support priority scoring;

applying change-management principles to the introduction of priority scoring;

training staff;

ensuring effective programme and project management to co-ordinate the introduction of a common priority scoring approach into different organisations, and to integrate it with interrelated initiatives such as health improvement programmes.

Getting clinicians to agree on a scoring form design can take time. The best way to make progress is to identify those clinicians who show most interest in trying priority scoring, even if they are few in number at first.

Shadow scoring is important as an interim stage. It is also important to realise that it is impossible to implement the ideal scoring system straightaway. People need to accept that a workable scoring system with recognised limitations is better than the perfect system that is never actually used.

From an evidence-based medicine perspective, procedure-specific scores are desirable. But for consistency and comparability, it is best to use a single generic scoring form for all patients.

However, a single generic form was not practical in the time available.

There were many reasons why specialists wanted to design their own forms, including a perceived need for a form reflecting criteria particularly relevant to certain specialties, and anxieties that one specialty might lose out to another over contract funding.

It is essential to define and use a priority scoring data set before implementation begins. Good data is important both to manage waiting or booking lists and to understand the impact of scoring.
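The article does not list the pilot's actual data items, but a minimal sketch of the kind of record such a data set might hold - every field name here is a hypothetical illustration - could look like this in Python:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical minimal record for a priority scoring data set; the
# pilot's actual data items are not listed in this article.
@dataclass
class ScoringRecord:
    patient_id: str
    specialty: str              # e.g. "urology", "ophthalmology"
    procedure: str
    score: float                # overall score from the scoring form
    date_scored: date
    date_listed: date           # when the patient joined the waiting list
    date_admitted: Optional[date] = None  # set on admission

    def wait_days(self) -> Optional[int]:
        """Actual wait in days, known once the patient is admitted."""
        if self.date_admitted is None:
            return None
        return (self.date_admitted - self.date_listed).days

# Example record (all values invented for illustration):
record = ScoringRecord("P001", "urology", "TURP", 7.0,
                       date(1998, 10, 1), date(1998, 10, 1),
                       date(1998, 12, 1))
print(record.wait_days())  # 61
```

Capturing both the score and the actual wait for every patient is what makes it possible to manage the list and to measure whether waiting times really do track priority.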

Sufficient local resources must be made available: each hospital needs at least one clinical champion and one manager who can lead on implementation.

Though priority scoring is not necessarily about rationing healthcare, it does offer a more sensible way of explicitly setting clinical thresholds for access to services.

Most people acknowledge that thresholds exist now anyway - the real problem is that they are unequal across consultants and trusts, and vary according to factors that often have nothing to do with patients' clinical need.

It should be stressed that if clinical thresholds are to be developed, this process must be clinically led, with significant involvement from primary care. The project shows that clinically led thresholds can be implemented successfully.

The government has promoted the idea of booking surgery dates at the time of referral, and it is sensible to consider implementing priority scoring alongside these booking systems to help ensure that patients are allocated dates fairly and consistently on the basis of need.

Policy makers should give serious consideration to the implications of priority scoring for the NHS as a whole. The national pilot project has shown that it is possible to establish priority scoring systems across different hospitals and specialties, provided there is appropriate support, expertise, clinical involvement and local commitment. An increasing number of local priority scoring projects are running across the UK, varying in scope and degree, and interest is growing. This will help to generate more experience of priority scoring and its benefits.

The establishment of a national framework for priority scoring, which also helps promote local developments, would be the best way to take this forward. A proper health economic evaluation of priority scoring, compared with current methods, should also be funded.

Finally, the government should support the views of clinicians and adopt an approach to waiting-time management that supports clinical measures of prioritisation. In the government's drive to modernise the NHS, it must ensure that a more modern system of booking patients goes hand in hand with a modern system of prioritisation based on priority scoring.

Case study: urology scoring at West Middlesex University Hospital

About 600 patients have been scored at West Middlesex University Hospital and, since going live with the scoring system, 222 have been admitted for operations.

The scores in urology range from one to 10, with 10 being the most urgent. These scores have been apportioned to six time bands, ranging from the most urgent (scores eight to 10), with a maximum wait of four weeks, to the least urgent (scores one and two), with a wait of 12 months or more.
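As an illustration of the banding logic, the Python sketch below maps a score to its maximum-wait band. Only the top and bottom bands are specified above; the four intermediate bands shown are hypothetical placeholders, not the hospital's actual figures.

```python
# Mapping urology scores (1-10, 10 most urgent) to maximum-wait bands.
# Only the top band (8-10 -> 4 weeks) and the bottom band (1-2 -> 12
# months or more) come from the article; the intermediate bands are
# hypothetical placeholders, not the hospital's actual figures.
BANDS = [
    (8, "4 weeks"),             # scores 8-10: most urgent (from article)
    (7, "3 months"),            # hypothetical intermediate band
    (6, "6 months"),            # hypothetical intermediate band
    (4, "9 months"),            # hypothetical intermediate band
    (3, "12 months"),           # hypothetical intermediate band
    (1, "12 months or more"),   # scores 1-2: least urgent (from article)
]

def max_wait(score: int) -> str:
    """Return the maximum-wait band for a given priority score."""
    for threshold, band in BANDS:
        if score >= threshold:
            return band
    raise ValueError("score must be between 1 and 10")

print(max_wait(9))  # "4 weeks"
print(max_wait(2))  # "12 months or more"
```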

Though numbers are small in some of the intermediate bands, the figure (above) shows the striking correlation between mean waiting times and priority scores for both day cases and inpatients. This is much better than before priority scoring was implemented. The graph does not include inpatients with scores of one or two because these non-urgent patients are still awaiting their procedures.

Hugh Rogers, lead clinician for urology and director of elective services at West Middlesex University Hospital, says: 'This data confirms my belief that priority scoring is a better way of prioritising patients and ensures patients are treated on the basis of clinical need.'

Key points

Assessing patients for non-urgent surgery according to a numeric priority system has rationalised waiting times in three specialties in a pilot project.

Traditional prioritisation methods result in little relation between urgency and waiting times.

Enlisting doctors' support for a scoring system takes time.

REFERENCES

1 Langham S, Soljak M, Keogh B, Gill M. The cardiac waiting game: are patients prioritised on the basis of clinical need? Health Services Management Research 1997; 10: 216-224.

2 British Medical Association Policy and Economic Research Unit. Waiting List Prioritisation Scoring Systems: a discussion paper. BMA, 1998.

Peter Adams is a partner at Adams Training and Advisory.