Winter planning - Capacity planning is challenging enough without the apparently random variables of winter. But a new model suggests changes are more predictable than we realise. Nathan Proudlove and Chris Brown report

Published: 31/01/2002, Volume 112, No. 5790, pages 24-25

Some believe that the volume of daily emergency admissions is random and cannot be predicted.

But a look at historical patterns allows forecasts to be made that are useful for strategic capacity management and monitoring demand.

Capacity planning is a major challenge in any system with variable demand and very little slack.

Demand from emergency patients is highly variable and acute hospitals are operating at very high levels of occupancy.

NHS guidance is for an average occupancy level of 82 per cent, but most acute hospitals in north west England are operating at levels well above this. And shorter waiting-time targets for elective patients make balancing emergency and elective demands more critical.

In terms of planning, winter has been the critical period. The Winter and Emergency Services Team report on last winter says forecasts of emergency demand are necessary for realistic capacity planning.1

Department of Health guidance for capacity planning recommends that such forecasting is carried out.2

But as the team found, this forecasting activity is poorly developed in many parts of the NHS. North West region is unaware of any other regions carrying out forecasting of the type we describe here.3

The Met Office is developing a forecasting system with the DoH, parts of which are being piloted this year in a sample of hospitals.2,4,5,6 It focuses on referrals from primary care and weather forecasting.

For the past three years, researchers from the University of Manchester Institute of Science and Technology have been providing planners at North West region with long-term (up to one year in advance) daily emergency admissions forecasts.

Separate forecasts are produced for each of the four geographical patches of North West region, each split into patient categories: ages 0-2; 3-17; 18-74; and 75+. The over-75s are divided into two groups: those with respiratory and unusual conditions, whose incidence increases in winter, and those with other conditions.

The work consists of forecasting daily admissions levels and modelling length-of-stay distributions.

These are then combined to give daily forecasts of bed requirements.
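As a rough illustration of how the two pieces fit together (a sketch in Python with made-up numbers, rather than the spreadsheet models actually used), expected bed occupancy can be built up by spreading each day's forecast admissions across the following days according to the proportion of patients still in hospital:

```python
# A minimal sketch of combining a daily admissions forecast with a
# length-of-stay distribution to estimate beds in use. Illustrative only.

def beds_in_use(admissions_forecast, survival_curve):
    """Expected number of occupied beds on each day.

    admissions_forecast: forecast admissions per day (day 0, 1, 2, ...).
    survival_curve: survival_curve[d] = proportion of patients still in a
        bed d days after admission (derived from the discharge proportions).
    """
    horizon = len(admissions_forecast)
    beds = [0.0] * horizon
    for admit_day, admitted in enumerate(admissions_forecast):
        for d, still_in in enumerate(survival_curve):
            day = admit_day + d
            if day < horizon:
                beds[day] += admitted * still_in
    return beds

# Illustrative inputs: 100 admissions a day, and a crude survival curve in
# which everyone occupies a bed on the day of admission, 40 per cent have
# left by the next day, and the rest leave over the following week.
forecast = [100] * 14
survival = [1.0, 0.6, 0.45, 0.3, 0.2, 0.12, 0.06, 0.0]
print([round(b) for b in beds_in_use(forecast, survival)])
```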

Historical data for daily admissions in each patch and patient category from 1993-94 to 2000-01 have been analysed. Initially these were generated from admissions records, but now discharge history data is used as it is more accurate. Significant underrecording of admissions was still evident in recent end-of-year accounting data, so adjustments had to be made.

The dates of public holidays vary from year to year, so the effects of these were estimated first and the data adjusted accordingly. Forecasting techniques were then used to produce base-level, long-term trends and seasonal factors.

The seasonal factors capture the strong effects of day-of-week and week- or month-of-year variation; a clear example is the winter peak observed in some of the patient categories. The model's forecast for a particular day consists of the level and trend, modified by the seasonal and, where appropriate, public holiday factors. These models are then used to generate forecasts for the next accounting year.
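A minimal sketch of this kind of forecast, with made-up factor values rather than the estimates used for North West region, might look like this:

```python
# A sketch of a level-plus-trend forecast modified by seasonal and public
# holiday factors. The factor values below are illustrative assumptions.
from datetime import date, timedelta

LEVEL = 250.0            # base daily admissions at the start of the year
TREND_PER_DAY = 0.05     # gradual long-term growth
DOW_FACTOR = {0: 1.10, 1: 1.02, 2: 0.98, 3: 0.97, 4: 1.00, 5: 0.93, 6: 1.00}
HOLIDAY_FACTOR = {date(2002, 1, 1): 0.85}   # estimated public-holiday effect

def week_of_year_factor(d: date) -> float:
    # Crude winter peak: admissions assumed ~8 per cent higher in Dec-Feb.
    return 1.08 if d.month in (12, 1, 2) else 1.0

def forecast(d: date, start: date) -> float:
    t = (d - start).days
    base = LEVEL + TREND_PER_DAY * t                      # level and trend
    f = base * DOW_FACTOR[d.weekday()] * week_of_year_factor(d)
    return f * HOLIDAY_FACTOR.get(d, 1.0)                 # holiday adjustment

start = date(2001, 4, 1)   # accounting year start
for offset in range(7):
    d = start + timedelta(days=offset)
    print(d, round(forecast(d, start), 1))
```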

These simple models work as well as, or better than, more mathematically complex techniques, and have the advantages that they can be implemented using spreadsheets and are more easily interpreted. Figure 1 shows some of the results from last winter. The average percentage error is about 6 per cent.

Mathematical models can do no more than assume that the future will be structurally like the past, while history illustrates that, for example, winter effects vary in timing and intensity. But these models give accurate enough predictions for strategic capacity planning, such as the annual planning cycle suggested by the DoH.2

They could also act as 'base-level' predictions on which could be superimposed the much shorter-term 'spike' effects of the weather and flu epidemics (which can be forecast or picked up and projected by systems such as those being piloted by the Met Office) and the 'what-ifs?' suggested by the DoH.2
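As a rough sketch of how such a superposition might work (an assumed form, not the piloted system or the authors' tool), a short-term spike can be treated as a multiplicative uplift applied to the base-level forecast over a window of dates:

```python
# A sketch of superimposing a short-term 'spike' (for example a flu outbreak)
# on a base-level forecast as a multiplicative uplift. Illustrative only.
from datetime import date, timedelta

def apply_spike(base_forecast, spike_start, spike_days, uplift):
    """base_forecast: dict mapping date -> forecast admissions.
    Returns a copy with forecasts in the spike window multiplied by uplift."""
    adjusted = dict(base_forecast)
    for offset in range(spike_days):
        day = spike_start + timedelta(days=offset)
        if day in adjusted:
            adjusted[day] *= uplift
    return adjusted

# Illustrative base forecast: a flat 250 admissions a day for two weeks,
# with a five-day flu spike adding 20 per cent from 10 January.
start = date(2002, 1, 7)
base = {start + timedelta(days=i): 250.0 for i in range(14)}
with_flu = apply_spike(base, date(2002, 1, 10), spike_days=5, uplift=1.2)
print(sorted((d.isoformat(), v) for d, v in with_flu.items()))
```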

Initial attempts at modelling length-of-stay involved forecasting the daily mean length-of-stay for each patient category in each patch, in the same way as for daily admissions levels. All patients in a particular category and patch admitted on a particular day were assumed to be discharged after the forecast mean length-of-stay.

But this produced unrealistically 'lumpy' discharge profiles and the end-of-accounting-year censoring in the daily mean length-of-stay history was severe.

In addition, the predictive models of mean length-of-stay were not as accurate as for the admissions volumes, because the mean length-of-stay can be biased by one or a few patients with very long stays.

Recently, a different approach has been taken. Two (accounting) years' worth of discharge file data was used (nearly 1.2 million discharge records) to give length-of-stay distributions showing the proportions of patients discharged on each day following admission. Patients were categorised according to their age on admission and, for those aged 75+, by diagnosis code.
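A sketch of the basic calculation (the record fields here are assumptions, not the actual discharge-file layout) is:

```python
# A sketch of deriving a length-of-stay distribution from discharge records:
# the proportion of patients in a category discharged on each day after
# admission. Field names and sample data are illustrative assumptions.
from collections import Counter

def los_distribution(records, category):
    """records: iterable of dicts with 'category' and 'los_days' keys."""
    stays = [r["los_days"] for r in records if r["category"] == category]
    counts = Counter(stays)
    n = len(stays)
    max_day = max(counts)
    return [counts.get(day, 0) / n for day in range(max_day + 1)]

sample = [
    {"category": "75+ respiratory", "los_days": 0},
    {"category": "75+ respiratory", "los_days": 2},
    {"category": "75+ respiratory", "los_days": 2},
    {"category": "75+ respiratory", "los_days": 5},
]
print(los_distribution(sample, "75+ respiratory"))  # [0.25, 0.0, 0.5, 0.0, 0.0, 0.25]
```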

The mean length-of-stay of emergency patients was influenced strongly by a few patients with very long stays (one was as long as 57 years). For example, in one data set only 0.02 per cent of patients had stays of more than two years, yet these stays increased the overall mean stay from 8.0 to 8.9 days.

As well as biasing the overall length-of-stay, these very long stays may represent past practices that are unlikely to be repeated for newly admitted patients.

This suggests that crude measures based on mean length-of-stay should be treated with care. The very-long-stay cases were therefore removed from the data before modelling, as were the very small number of invalid records.
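The cleaning step, and the sensitivity of the mean to a handful of very long stays, can be illustrated with a small sketch (the two-year cut-off and the data are illustrative, not the rule or figures used for the regional discharge files):

```python
# A sketch of excluding very-long-stay (and invalid) records before modelling,
# showing how much one extreme stay can pull up the mean length-of-stay.
TWO_YEARS = 2 * 365

stays = [3, 5, 8, 6, 10, 2, 7, 9, 4, 6] * 500 + [20 * 365]   # one 20-year stay

valid = [s for s in stays if 0 <= s <= TWO_YEARS]             # drop outliers

print(f"mean with outlier:   {sum(stays) / len(stays):.1f} days")
print(f"mean after trimming: {sum(valid) / len(valid):.1f} days")
```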

Length-of-stay within a patient category was not significantly seasonal, but the length over all categories did vary with the season. Figure 2 shows the discharge patterns from the history data (the discharge files). The seven lines on the graph represent the proportions of patients discharged each day, following admission on each of the seven days of the week.

Interestingly, graphs such as these, and of overall discharge rates against day of week, show that there is no surge of discharges early in the week; instead, discharge activity generally builds up on Thursday and peaks on Friday. This suggests that the weekend, rather than delaying discharge, results in a Friday rush to discharge to avoid patients staying in over the weekend. It seems the discharge pattern of many hospitals may raise issues about appropriateness of discharge and quality of care, rather than 'wasted' bed days.

Analysing these graphs for each of the 20 data sets suggests some differences in practice between patches - for example, in the proportions of patients discharged the same day as admission and the day after admission. It would be interesting to analyse this at trust level to identify differences in practice and the effects of initiatives to modify discharge practices, by comparing patterns before and after.

The proportion of patients discharged on each day following admission was therefore modelled as being dependent on time (number of days) since admission and day of the week of stay (or potential discharge). For up to a week after admission, the observed patterns of discharge proved difficult to model for some patient categories, so the observed values were used, with the mathematical model taking over from this point.

Discharges also varied significantly, depending on the day of week of admission. Seven different models were produced for each data set - a total of 140 models. The models were constructed using Microsoft Excel, so they could easily be used at North West region to update forecasts as more data become available, and to investigate 'what if?' scenarios.
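A sketch of one such hybrid discharge model (the geometric tail used here is an assumption standing in for the fitted mathematical model, and the proportions are illustrative) is:

```python
# A sketch of the hybrid discharge model described above: observed discharge
# proportions are used for the first week after admission, and a simple fitted
# curve (here an assumed geometric tail) takes over afterwards. One such model
# would be kept for each admission weekday in each data set.

def make_discharge_model(observed_first_week, tail_rate):
    """observed_first_week: proportions discharged on days 0..6 after admission.
    tail_rate: daily discharge probability applied to patients still in
    hospital from day 7 onwards (the model taking over from observed values)."""
    remaining_after_week = 1.0 - sum(observed_first_week)

    def proportion_discharged(days_since_admission):
        if days_since_admission < 7:
            return observed_first_week[days_since_admission]
        # Geometric tail over the patients still in hospital after a week.
        days_into_tail = days_since_admission - 7
        return remaining_after_week * tail_rate * (1 - tail_rate) ** days_into_tail

    return proportion_discharged

# One model for, say, Monday admissions in one patch and patient category.
monday_model = make_discharge_model([0.20, 0.15, 0.12, 0.10, 0.08, 0.07, 0.05], 0.25)
print([round(monday_model(d), 3) for d in range(14)])
```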

Despite being spreadsheet-based, the entire set of models takes just minutes to run on a fast PC.

A 'what if?' analyser has been built, so the effect of changes in level or trend at particular times can be investigated. Scenarios based on past years can also be examined: for example, what 2001-02 might look like if the overall general volumes were the same as those of 1999-2000.

Work continues on developing the models and comparing approaches. An interesting development of the 'what if?' facilities would be to enable investigation of the possible effects of weather, flu epidemics or general winter 'spikes' of different magnitudes and starting points, based on examples in the historical data.

Dr Nathan Proudlove is lecturer in operational research, Manchester School of Management, University of Manchester Institute of Science and Technology, and associate of the Health Organisation Research Centre.

Chris Brown is emergency care coordinator, North West regional office.

Key points

It is possible to forecast emergency admissions and bed requirements with considerable accuracy up to a year in advance.

A model is being developed which would analyse the effects of weather, flu epidemics and different surges in demand.

The discharge pattern of many hospitals indicates a rush to discharge by the weekend that could raise concerns about appropriate care.

REFERENCES

1 Department of Health. Winter Report 2000-2001. DoH, 2001.

2 Department of Health. 2001/2002: arrangements for whole system capacity planning - emergency, elective and social care. HSC 2001/014: LAC(2001)17; supporting documents: Issues to be addressed in capacity planning for 2001/02 and Local checklist, 2001.

3 Sergeant A. Presentation at Bed Management in the NHS conference, Scientific Societies Lecture Theatre, London, 24 September 2001.

4 The Health Forecasting Unit, The Met Office. Forecasting the nation's health, 2001.

5 Dobson R. 'Health weather forecasts' to be piloted in England. Br Med J 2001; 322: 72.

6 White C. Weather reports to be used to forecast NHS workload. Br Med J 2001; 323: 251.