Replicating a promising change programme from one organisation to another often falls short of the original’s success. Jo Bibby explains what can be done to ensure an effective transition


Replicating one organisation’s successful change programme in another is not just a case of ‘copy and paste’


Programmes to improve quality are a well established feature of modern healthcare systems. When initiatives are seen to be successful, attempts are frequently made to replicate and spread them elsewhere. Yet too often many potentially promising programmes fall short when implemented in a new setting.

‘Successful replication and spread of improvement efforts depend on a deep understanding of how and why programmes work’

With the health service under immense pressure to deliver improvements, a recently published report from the Health Foundation explores the reasons why such shortfalls might occur. Lining Up: How do improvement programmes work? is the second report drawing on learning from the foundation’s research project.

Led by a team from Leicester and Birmingham universities, the report set out to explain what lay behind a famous US improvement programme, Keystone, and then to explore what happened when an initiative it subsequently inspired, Matching Michigan, was launched in England.

It shows that successful replication and spread of improvement efforts depend on a deep understanding of how and why programmes work, as well as the context in which they are introduced. It underlines the importance of viewing improvement programmes not only in terms of their effect on specific measures of quality, but also as the beginning of creating a sustainable improvement culture.

Keys to success

The Keystone programme, in the US state of Michigan, attracted worldwide attention in 2006, when it reported a dramatic reduction in rates of bloodstream infections linked to central venous catheters (CVCs, often referred to as “central lines”), in over 100 intensive care units.

To understand how Keystone worked, the Lining Up team interviewed two of its leaders, examined Keystone’s original plan and, using social science theory, produced a retrospective interpretation of how and why the programme was successful.

While popular accounts of the programme suggested its achievements were down to a simple checklist of five evidence based interventions, the research team found that the reality was much more complex. Six main features seemed to be behind Keystone’s success:

  • Social pressures to join the programme: The ICUs involved came to perceive that it was unacceptable and damaging not to participate in or adopt the programme’s policies.
  • Convincing participants that infections were a problem with a solution: Keystone showed that CVC infections were not an unavoidable technical problem, but a social problem capable of being solved. The programme achieved this by telling personalised, hard hitting stories of lives blighted by central line infections, and then by using data to show infection rates were variable and infections often avoidable.
  • Creating a sense of community: Over the course of the programme, participants came together in a range of ways, including teleconferences, residential workshops and networking sessions. These events, alongside the distribution of programme tokens such as wristbands, created a networked community of people committed to the programme’s goals.
  • Interventions that worked in more than one way: The programme used multiple interventions, not just one. For example, the checklist of interventions known to prevent CVC infections was more than a mere roster of actions to be performed. Nurses came to feel safe in speaking up if a doctor did not comply with the checklist, because the evidence so clearly showed the five interventions were in the patient’s interest. By exposing discrepancies between actual and ideal practice, the checklist highlighted each individual’s responsibility for infection control.
  • Harnessing data to drive performance change: The Keystone team collected data from ICUs and then fed them back with anonymised rates from other units taking part. This encouraged self-monitoring and stimulated action where needed, motivating ICUs to match the performance of those with the lowest rates. Many ICUs reported sharing the data throughout their unit, often posting performance reports on bulletin boards in staff lounges or conference rooms.
  • Deploying tougher tactics: Though much of the programme was based on consensus and self-determination, it did make some judicious use of harder tactics. These included pressure on hospital chief executives to purchase necessary supplies or risk being excluded from the programme.

Keystone showed that large scale improvement programmes can change attitudes and culture, as well as address specific aspects of quality. But could its success be replicated elsewhere? And what impact would a new context have on the programme’s ability to achieve its aims?

‘The team discovered that while the rate of infections declined over the course of the programme, it was difficult to assess the extent to which the initiative contributed to this outcome’

To answer this, the Lining Up team were given the opportunity to study, in real time, the implementation of a two year NHS programme based on Keystone. The National Patient Safety Agency launched Matching Michigan in England in 2009, with the aim of repeating the successes of the US programme.

They used ethnographic techniques to observe the culture and behaviour of ICUs as they implemented the programme, spending over 900 hours in 19 participating ICUs, observing care, including central line insertion, on the units. They also conducted interviews with staff, attended programme training events and analysed relevant documents.

The team discovered that, while the rate of infections declined over the course of the programme, it was difficult to assess the extent to which the initiative contributed to this outcome.

They found that technical practices were generally very good, but that the broader set of factors shown to be relevant in Keystone were highly variable and depended on national, local and internal context.

Differing responses

They also found that responses to the Matching Michigan programme differed across participating units, with three characteristic responses:

  • Transforming: In one ICU observed by the team, staff saw the programme as having radically improved care after local leaders fostered changes to practice, culture and behaviour using the programme’s principles, practices and resources.
  • Boosting: In five ICUs, the programme was credited with reinforcing good practice or supporting further improvement, enabling staff to ask for resources or persuade reluctant colleagues to conform to good practice.
  • Low impact: In 11 ICUs, staff attributed little of their practice or behaviour to the programme, seeing the influences on what they did as originating elsewhere.

‘Little change occurred if consultants were not persuaded that the programme was grounded in high quality evidence’

These findings indicate that in some places the programme was able to support wider and deeper change, beyond the reduction in infection rates. The researchers suggest there are a number of reasons why these differences in impact occurred.

They discovered that, though Matching Michigan reproduced many of the components of the original Keystone programme, it did not precisely reproduce the features that were theorised as important to its success. For instance, no sense of community appeared to have developed between the participating units.

Historical context

The history and context into which Matching Michigan was introduced was also found to have a significant impact on how units responded. In particular:

  • ICUs in England had already made significant progress in implementing good practice in CVC care, with the result that the starting infection rate was much lower in England than it had been in Michigan at the start of the Keystone programme.
  • Many participants saw the English programme as imposed from outside and lacking in professional ownership. Locating the programme in a government agency rather than a professional organisation or research collaboration intensified this effect, alienating some frontline clinicians. The history of target driven initiatives in the 2000s led many NHS staff to have a deep suspicion of centrally led government programmes. They often experienced these previous programmes as invasive and autocratic and were suspicious that Matching Michigan would follow that trend.
  • Some participants feared the potential for data collected through Matching Michigan to be used for public shaming in league tables or to exact financial penalties.

Organisations varied in the commitment and enthusiasm they invested in the programme and its impact depended on senior medical and nursing staff buying into it.

‘By understanding the mechanisms that make a programme successful initially and identifying the contextual and historical influences at play, the chances of its success elsewhere will increase’

Little change occurred if consultants were not persuaded that the programme (especially its cultural change elements) was grounded in high quality evidence, or if they saw it as an illegitimate bureaucratic intrusion into professional work.

One of the strengths of the NHS is that, with the right learning and support systems, it has the potential to spread good practice across the system to the universal benefit of patients. However, realising this potential is not straightforward.

Understanding the totality

Professor Mary Dixon-Woods, who led the Lining Up research, urges those involved in improvement to understand the totality of programmes.

“If you’re going to launch a new drug, you would do a lot of work to characterise the mechanisms of action and theorise how it was going to work in the body, then put it through a whole series of stages to establish whether and how it works,” she says.

“We would want to know what equipment and training for staff was needed and how to counter any side-effects. In future, I’m hoping we will do the same with improvement programmes.”

Understanding the mechanisms that make a programme successful initially, and identifying the contextual and historical influences at play, will increase the chances of its success elsewhere.

If improvement programmes are viewed as vehicles for supporting culture change and building capability, they have the potential to have a broader and longer lasting impact. Hitting one measure at a time will not deliver the transformation across the NHS that the Berwick review and others have identified as needed.

Dr Jo Bibby is director of strategy at the Health Foundation