There is an old adage, 'if you can't measure it, you can't manage it', and measurement has long been a key ingredient of NHS performance management.

But measurement for improvement, rather than for judgement, has been an important element of our work at Luton and Dunstable Hospital foundation trust in improving patient safety. The ability to quantify improvement, to put a change in place and then measure its impact, can motivate frontline staff, and such measurement has become the norm in reports to our board.

Measurement for improvement enables us to assess progress, improve care and prevent harm. Displaying results on ward noticeboards and in departments has become standard and promotes healthy competition between wards and openness to the public. We need to go further and use our websites to show how safe our hospitals are: what our standardised mortality rate is, our infection rates and our adverse events.

Implementing the Safer Patients Initiative required 35 new measures to track the 29 improvement initiatives. Measurement has been vital, but it took us over 12 months to put this in place. The NHS is not yet geared up to measure the harm we might cause to patients, and national work is needed.

According to Institute for Healthcare Improvement president Don Berwick, organisations and individuals often react to data in four different stages:

  • the data is wrong;

  • the data is right but it is not a problem;

  • the data is right, it is a problem, but it is not my problem;

  • the data is right, it is a problem and it is my problem.

If frontline staff themselves agree what will be measured, they are much less likely to rubbish the data or spend fruitless hours debating "why things are different round here".

Measurement is central to reliability. Reliable systems are a prerequisite to high-quality, safe care. You cannot know your system is reliable unless you measure its processes.

We found this with the introduction of care bundles in the intensive care unit - clinicians believed we had implemented good practice many years earlier through critical care networks. But when we began to measure compliance with every element of the bundle, it became clear we were only about 30 per cent compliant and we had to work hard to get to 95 per cent and better.

The decision to measure something can throw up surprising results. When we introduced an early warning scorecard for routine patient observations, it became clear many staff had lost the art of recording respiratory rates. We had to put in place hospital-wide retraining, and it took a year to reach a reliable system that allowed us to reintroduce the scorecard and other components to better manage acutely ill, deteriorating patients, with an eventual 50 per cent reduction in cardiac arrests.

Measurement alone is not enough, but combined with patients' own stories of harm, it can win the hearts and minds of the most resistant clinicians.