
How predictive modelling can help reduce risk, and hospital admissions

Accurate prediction of patients at risk is central to preventing admissions, but funding to develop predictive models has been withdrawn by the DH. Geraint Lewis and colleagues look at some of the tools available to local commissioners now charged with reducing admissions.

Following the Department of Health’s decision not to fund future updates of the predictive models it had previously commissioned – PARR++ and the combined predictive model – the development of these models will be left to local commissioners. Here we look at what the future might hold for predictive modelling and some practical pointers for commissioners as they begin to navigate the market.

Unplanned hospital admissions have been estimated to cost the NHS £11bn a year. If these admissions could be predicted and prevented then not only might the quality of care and health status of the patients be improved, but there could also be large net savings for the NHS as a whole.

Key to this preventive approach, however, is the need to predict accurately which patients are at risk of having an unplanned hospital admission – and that is where predictive models, or risk stratification tools, come in. These are statistical models, usually applied to routine health data, that aim to predict future adverse events.

What to predict

What should we aim to predict? In principle, predictive risk models are useful for predicting any event that is undesirable to the patient; important to the health service – which usually means costly; preventable; and recorded in routine administrative data.

Examples include admissions and readmissions to hospital and the start of intensive social care. Models are able to predict admissions over a specified timeframe – for example over the next 30 days, six months, 12 months or even longer. Some models are designed to predict multiple admissions (for example, two or more unplanned admissions in the next 12 months).

Different predictive models have different data requirements, so this can become an important factor when choosing a model. It is important to consider what data is readily available and accessible, and what effort and expense would be required to access further data.

Options include the secondary uses service dataset, which is a compilation of hospital data; GP data; community services data; social care data; and variables from the census. For unplanned hospital admissions, secondary uses service data has proved to be easy to access and it has the most powerful predictive variables.

With the exception of census data, all of the other data sources record information at the level of the individual patient. This raises important issues in terms of information governance, confidentiality and data security. In 2006, the Patient Information Advisory Group at the Department of Health published clear guidelines on the acceptable use of predictive models in the NHS. Most importantly, all data should be in pseudonymous format, and predictive risk scores should only be made available to clinicians who are already known to each patient.

Another key consideration for implementation of predictive models is their cost, which includes the cost of the predictive model itself (that is, the mathematical algorithm); the cost of the software on which it is run; the cost of obtaining the data that the model analyses; and a variety of labour and dissemination costs. Some primary care trusts have implemented predictive models using in-house skills, while others have purchased algorithms and software from external agencies – or a mixture of both.

Accurate predictive models

There is a wide range of statistics available for comparing the accuracy of different predictive models, including the R-squared and the C-statistic. However, two particularly useful and intuitive measures are the positive predictive value (PPV) and the sensitivity of the model.

PPV looks at the accuracy of the list of high risk patients, whereas the sensitivity takes a population perspective. Both are important. For a model that predicts unplanned hospital admissions, a high PPV means that many of the patients identified as “high risk” by the model would, without intervention, have experienced an unplanned admission. A high sensitivity means that a large proportion of the population who have an unplanned admission will be identified by the model. For any model, the PPV and sensitivity can be traded off against each other by varying the risk score threshold used to select patients.
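The following is a minimal sketch of that trade-off. The risk scores, outcomes and thresholds are purely illustrative assumptions, not drawn from any real model or dataset; it simply shows how lowering the threshold tends to raise sensitivity while lowering PPV.

```python
# Illustrative sketch of the PPV/sensitivity trade-off for a risk model.
# The scores and outcomes below are invented for demonstration only.

def ppv_and_sensitivity(risk_scores, had_admission, threshold):
    """Flag patients as 'high risk' at a given threshold and return
    (PPV, sensitivity) against the admissions that actually occurred."""
    flagged = [score >= threshold for score in risk_scores]
    true_positives = sum(f and a for f, a in zip(flagged, had_admission))
    ppv = true_positives / sum(flagged) if any(flagged) else 0.0
    sensitivity = true_positives / sum(had_admission)
    return ppv, sensitivity

# Hypothetical cohort: predicted risk of unplanned admission vs. outcome.
risk_scores   = [0.92, 0.85, 0.70, 0.55, 0.40, 0.30, 0.20, 0.10, 0.05, 0.02]
had_admission = [True, True, False, True, False, False, True, False, False, False]

for threshold in (0.8, 0.5, 0.2):
    ppv, sens = ppv_and_sensitivity(risk_scores, had_admission, threshold)
    print(f"threshold {threshold:.1f}: PPV {ppv:.2f}, sensitivity {sens:.2f}")
```

In this toy cohort, flagging only patients scoring above 0.8 gives a PPV of 1.0 but a sensitivity of 0.5; dropping the threshold to 0.2 raises sensitivity to 1.0 while the PPV falls to roughly 0.57.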

It is important to engage local clinicians when implementing a predictive model and clinicians need to understand how the predictions made by the model can help them in managing their population with long term conditions. However, we should remember that research suggests that clinicians do not make accurate predictions about future admissions, so it is possible that they will not always agree with the predictions made by the model.

Reducing admissions

People often ask how effective a particular predictive model is at reducing hospital admissions. The answer is always: not at all. And that goes for all models. A predictive model will simply tell you which patients are at risk of your chosen outcome, be it readmission in the next year, or admission to a care home in the next six months. What a model cannot do is to prevent a deterioration in a patient’s health status or reduce their risk of admission.

Clearly, therefore, the choice of model needs to form part of a wider population management strategy and the effectiveness and cost-effectiveness of an associated intervention is key. If a hospital avoidance scheme, such as a community matron service, is to make net savings, then the cost of the intervention per patient must be lower than the average expected cost of unplanned hospital admissions for those patients.

There is a balance to be made between the cost of the model and its associated intervention, the risk scores of the patients offered the intervention, and the effectiveness of the intervention in preventing unplanned hospital admissions for those patients.
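A minimal sketch of that break-even arithmetic is below. All figures (predicted risk, admission cost, effectiveness, intervention cost) are hypothetical assumptions chosen to illustrate the calculation, not published NHS costs.

```python
# Illustrative break-even arithmetic for a hospital avoidance scheme.
# All figures are assumptions for demonstration only.

def expected_net_saving_per_patient(risk_of_admission,
                                    avg_admission_cost,
                                    intervention_effectiveness,
                                    intervention_cost):
    """Expected saving from enrolling one patient: the admissions avoided
    (predicted risk x relative reduction) valued at the average admission
    cost, minus the cost of the intervention per patient."""
    avoided_admission_value = (risk_of_admission
                               * intervention_effectiveness
                               * avg_admission_cost)
    return avoided_admission_value - intervention_cost

# Hypothetical intervention: costs 400 GBP per patient and reduces
# admissions by 20%, against an average admission cost of 2,500 GBP.
for risk in (0.60, 0.90):
    saving = expected_net_saving_per_patient(risk, 2500, 0.20, 400)
    print(f"predicted risk {risk:.0%}: expected net saving {saving:+.0f} GBP")
```

On these assumed figures, enrolling a patient with a 60 per cent predicted risk produces an expected net loss of 100 GBP, whereas a patient with a 90 per cent predicted risk produces an expected net saving of 50 GBP – which is why all three factors have to be considered together.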

Balancing these three factors shows that there are considerable opportunities to generate net savings for the NHS while improving the health status of high risk patients.

However, the evidence for the cost-effectiveness of hospital avoidance schemes based on predictive modelling is currently limited.

One such scheme is known as virtual wards, and the National Institute for Health Research is currently funding a major evaluation of virtual wards in three parts of England. There are also randomised controlled trials of virtual wards underway in Toronto and New York City.

Which patients?

Which patients should we focus on? The Kaiser pyramid illustrates the distribution of risk across a typical population. Predictive models are able to identify which patient belongs where on next year’s pyramid. The shape of the pyramid shows that the very high risk group constitutes only a tiny proportion of any given population. However, these high-risk people each account for a disproportionately large amount of future service utilisation.

Moving down the risk pyramid, the population size increases. So although in these lower segments each individual accounts for a smaller proportion of future utilisation, there are many more people here so in aggregate they actually represent a greater proportion of future utilisation.
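The arithmetic below illustrates this. The segment sizes and admission rates are invented for the example and are not Kaiser Permanente or NHS figures; the point is simply that the larger, lower-risk segments can account for more admissions in total than the small very high risk group.

```python
# Illustrative risk-pyramid arithmetic (all numbers are assumptions).

population = 100_000
# segment: (share of population, expected unplanned admissions per person)
segments = {
    "very high risk": (0.005, 2.0),
    "high risk":      (0.045, 0.6),
    "moderate risk":  (0.20,  0.15),
    "low risk":       (0.75,  0.02),
}

for name, (share, admissions_per_person) in segments.items():
    people = population * share
    total_admissions = people * admissions_per_person
    print(f"{name:>15}: {people:>7.0f} people, "
          f"{total_admissions:>5.0f} expected admissions")
```

In this toy example the 500 very high risk patients generate around 1,000 expected admissions, while the 20,000 moderate risk patients generate around 3,000 – far more in aggregate, despite a much lower risk per person.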

It is important that the intervention is targeted carefully at the right population after having taken account of the expected effectiveness of the intervention. For example, it is not cost-effective to enrol people in intensive interventions such as a community matron service or virtual wards unless they are at very high predicted risk.

As the DH begins to leave predictive modelling development up to the free market, the intention is that existing organisations involved in the field will begin to innovate further. New model vendors are expected to enter the market, commissioners may be offered a greater choice of models, and competition may help push prices down.

Whether this will all be borne out in reality remains to be seen. The market is already relatively busy, with a range of commercial companies and non-commercial organisations having developed their own products. There have also been some calls for the NHS Commissioning Board to undertake a procurement of one or more national predictive models. Whatever happens in the years to come, it is likely that predictive models will play an increasingly important role in the work of both commissioners and providers of care for patients with long term conditions.

Future modelling

We can look to the US for insights into where the science of predictive modelling may go in the years ahead. The insurance basis of the US system means predicting risk is core business there. One idea being developed is so-called “impactibility models”. These seek to identify the sub-group of high risk patients expected to be most amenable to preventive care.

Various approaches are being pursued, including:

  • prioritising patients with particular diagnoses, for example patients with one or more of the so-called ambulatory care sensitive conditions;
  • prioritising patients with multiple “gaps” in their care (ie, a difference between what we would expect from evidence-based guidance and the care that is recorded in the GP record);
  • excluding “difficult” patients (eg, patients with mental health problems, alcohol and substance misuse issues, language difficulties).

All three approaches offer the opportunity to improve the efficiency of hospital avoidance interventions. Prioritising patients with particular diagnoses and prioritising patients with multiple “gaps” in their care should also improve equity, because ambulatory care sensitive conditions and gaps in care are both more common in more deprived areas. However, excluding “difficult” patients is likely to worsen inequalities and therefore should be guarded against.
