'If we are interested in productivity, quality of output and whether services are being delivered on time and at the right cost, theatre use is not the right measure.'

Healthcare organisations collect vast amounts of data on patients, utilisation, processes, waiting and throughput, but we appear to find it difficult to measure productivity.

For example, an acute trust might claim its theatre utilisation is 95 per cent. So what? What does that mean? How can it influence business decisions? And how could we possibly use it to inform where improvements can be made?

All activity should contribute to the quality, cost, delivery (QCD) and safety of the services we provide. If it does not, we should question why we are doing it.

QCD aligns well with organisational priorities, with the proviso that the priorities need to be correct to start with. If the priorities are incorrect and measures are applied to them, the results will be undesirable.

'Quality', for example, might mean reducing MRSA or serious untoward incident rates. 'Cost' links directly to maintaining financial balance and effectiveness. 'Delivery' could be an activity that contributes towards meeting the 18-week target or the faster turnaround of instrument sets within sterile services, or samples within diagnostics.

Measures of QCD are a good place to start when evaluating productivity, but only if they are used to inform decisions. Theatres, for example, currently measure utilisation, which is simply a measurement of activity. In broad terms it is:

Theatre utilisation = ((anaesthetic + operating time) divided by funded theatre time (excluding recovery)) x 100, eg 11.5 divided by 12 x 100 = 95.8 per cent

This, however, is not necessarily a good measure of productivity. One surgeon might carry out a single procedure taking 3.5 hours of a 4-hour session, giving a theatre utilisation of 87.5 per cent. Another surgeon carrying out the identical procedure might complete two of them in 3.25 hours, giving a utilisation of 81 per cent, even though that list is twice as productive.
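As an illustrative sketch (the function and parameter names are mine, not the article's), the utilisation arithmetic above can be written as:

```python
def theatre_utilisation(hours_used, funded_hours):
    # (anaesthetic + operating time) divided by funded theatre time, x 100
    return hours_used / funded_hours * 100

# One procedure taking 3.5 hours of a 4-hour session:
print(theatre_utilisation(3.5, 4))   # 87.5
# Two identical procedures completed in 3.25 hours:
print(theatre_utilisation(3.25, 4))  # 81.25
```

The lower figure belongs to the more productive list, which is exactly why utilisation alone misleads.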

If we are interested in knowing whether theatres are productive, whether the quality of their output is good and whether they are delivering a quality service on time and at the right cost, theatre utilisation is not the right measure.

An alternative might be a measure comparable to overall equipment effectiveness (OEE) in manufacturing.

An OEE measure makes the link between the quality of outcomes and the ability of theatres to deliver. When combined with a second measure, such as theatre cost per hour per person, we might be coming close to having measures of theatre performance that tell us exactly what is going on and allow us to compare ourselves directly to our peers. A measure for this might look like:

Availability = ((planned run time - down time) divided by planned run time) x 100

Performance = ((operations performed x designed cycle time (knife-to-skin time)) divided by actual run time) x 100

Quality = ((total output - defects) divided by total output) x 100
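The three components can be sketched as simple functions (names and the example figures below are my own, chosen only to illustrate the formulas):

```python
def availability(planned_run_time, down_time):
    # ((planned run time - down time) divided by planned run time) x 100
    return (planned_run_time - down_time) / planned_run_time * 100

def performance(operations_performed, designed_cycle_time, actual_run_time):
    # ((operations x designed cycle time) divided by actual run time) x 100
    return operations_performed * designed_cycle_time / actual_run_time * 100

def quality(total_output, defects):
    # ((total output - defects) divided by total output) x 100
    return (total_output - defects) / total_output * 100

# Hypothetical session: 12 funded hours, 2 hours down time,
# 10 operations at a designed 0.9-hour cycle, 20 outputs with 2 defects.
print(availability(12, 2))         # ~83.3
print(performance(10, 0.9, 12))    # 75.0
print(quality(20, 2))              # 90.0
```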

If we took a theatre that is available from 8am until 8pm but lost two hours of that time (for example, work starting 30 minutes late plus gaps between cases) and measured utilisation, it would be (10 hours divided by 12 hours) x 100 = 83 per cent

If we took an overall theatre effectiveness (OTE) measure for the same theatre and counted performance as knife-to-skin time it might look more like:

0.83 (availability) x 0.75 (performance) x 0.9 (quality) = 0.56 = 56 per cent (OTE)
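The multiplication above, as a sketch (the function name is mine; the factors are the article's worked figures expressed as fractions):

```python
def overall_theatre_effectiveness(availability, performance, quality):
    # OTE is the product of the three factors, each expressed as a fraction
    return availability * performance * quality

print(round(overall_theatre_effectiveness(0.83, 0.75, 0.9), 2))  # 0.56
```

Because the factors multiply, a theatre that looks respectable on each dimension in isolation can still score barely above half on the combined measure.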

The two measures reflect the same situation, but the first suggests the theatre is well utilised and that we have little room for improvement; the second would suggest otherwise. Do we want to continue to measure the first because it is easy, or the second because it is valuable?

Through the second measure we can see the level of opportunity for improvement. This enables us to start to examine processes to increase availability, performance and quality, and with them QCD, through the use of root cause problem-solving techniques and other appropriate tools.

We need to recognise that we have a choice: we can continue to measure and perform to a variety of targets, or we can collect valuable data that relates to our performance as an organisation. The latter will put us in a stronger position to make informed, fact-based decisions.

Andrew Castle is service improvement consultant at the NHS-funded South West London Improvement Academy.