What's wrong with our research methods? They have come a long way in 50 years. We can now test new products, procedures, even whole approaches to medical care. Or at least we can in healthcare's technical areas, where measuring the link between intervention and outcome is relatively easy.
Elsewhere in the NHS, particularly community-based care and health service planning, those links are harder to make.
For example, we believe multidisciplinary working improves outcomes in healthcare delivery, but find it difficult to demonstrate.
Traditional medical research is failing to meet all the needs of the modern healthcare 'industry' for two reasons. First, such methods require that all variables apart from the one being studied are kept constant. This is easy when examining the impact of a new heart valve replacement operation: we can hold the external environment, the anaesthetic and the length of stay constant while studying the new technique.
For more qualitative interventions, such an approach is harder. For example, we think the best place to reduce the incidence of unwanted teenage pregnancies is in the community, but how can we measure this directly? Are teenagers more affected by TV, the availability of contraception advice, pub opening hours or one of their peers having a baby?
Many changes to the health system are similarly complex; we like to think that improving GPs' corporacy by linking them to 'out-of-hours' co-ops will improve patient care, but it would be difficult to isolate any resulting changes and attribute them to that link. So many other factors have changed in recent years that suggesting a link between some grand effect and one small change is ludicrous.
The second factor working against traditional clinical research is timescale. We no longer have the luxury of time;
five years to demonstrate a new heart operation's significance would be nice, but in the hothouse of 21st-century advance it is difficult - particularly in a politicised environment where goalposts are objects that slide continually over the field.
Just as the operation may soon be superseded by the next marvel of miniaturisation, so the political flavour of the month may be replaced by new thinking.
Modern policy is so iterative that traditional research and evaluation models are inappropriate.
Does objective, disinterested measurement of our work have any purpose, or is it merely an 'academic' exercise in the worst sense? It must have: without experimentation and evaluation, any human endeavour would make little progress.
Evaluation may help shape progress; 'pilot' studies may be test-beds in which some ideas work and others fail, but they may also be instruments of change, disseminating new ideas and innovation. The challenge is to bring evaluative methodologies into the 21st century.
That challenge is being met:
evaluation models are emerging which combine the traditional rigours of objective measurement with pragmatism appropriate for our times. Action research is beginning to gain credibility as an approach that breaks larger projects into many smaller parts and looks at them formatively.
In action research, projects are often revisited and changes reported even as the projects progress. At each stage, achievements are measured against stated objectives; each time, objectives are realigned with the changing environment (which includes the evaluation itself). Action research acknowledges that feedback to a project during its lifetime is bound to have an effect, but insists on careful stewardship of the evaluation so the impact can be factored in rationally as the study progresses.
As well as routine data and other 'normal' quantitative information, action research studies are likely to use a more 'people-oriented' approach: face-to-face interviews, focus groups and surveys. These are more likely to identify qualitative change, the cultural and behavioural issues so important to the changing health service environment.
In so doing, action research highlights another shortcoming in traditional evaluation: change happens first in these 'softer' areas. If we look merely for service delivery change, no recent changes in policy direction can be deemed a success.
Development of corporacy, ownership and a multidisciplinary ethos are all necessary precursors of service change; such shifts have been noted, and their value realised, only through adopting less medically centred, more sociologically derived evaluation methods.
Measuring such soft changes may be hard, but without them we would head down the slippery slope of reductionist, technocratic healthcare, which is little to do with health improvement and much to do with expensive gadgetry.