Nobody would dispute that we should do the best for patients. Despite the noise of clashing swords and conflicting priorities that fill the corridors of trusts up and down the land, the desire to make and keep people well unites practitioners and managers.
Getting it right for patients is, at least in the first throes of our working lives, one of the reasons why we get out of bed in the morning (and as time goes by, why we might lie awake at night).
There have been several initiatives to better meet the needs of patients. Greater patient involvement is one, improved access is another. Other attempts to improve patients' experiences focus on safety and the drive to change clinical interventions in light of what we know works: evidence-based practice.
Evidence of what works in preventing illness is less developed than evidence in clinical practice, and it must address more complex issues. There are few certainties when it comes to understanding behaviour. For example, the context in which people make choices relates to a range of personal resources, including money, time, knowledge, willingness, opportunity, support, medical history, confidence and others besides. Evidence that captures these facets adequately is often hard to find and even harder to apply.
Finding what works
There are several reasons for these difficulties. Few resources have been invested in examining what works. Research tends to follow drug trials, where the corporate money is. Elsewhere, investment has been inconsistent. For example, HIV prevention research was funded in the 1990s but not sustained.
And there are difficulties in assessing which types of research should be considered legitimate. Is it feasible to build a heart disease prevention programme for a city based on the results of a study with 10 participants, for example? Similarly, are methods that might be considered the gold standard in drug trials appropriate when considering the mess of people's everyday lives? Scientific methods might not ask the right questions or produce the right answers.
Sticking too rigidly to the evidence base might stifle innovation - if we can only pursue what we know works, where do we learn new things? And if we agree that we should support interventions which are known to work, we might be in danger of sliding into a position that says we should only pursue what has been proven. The fact that something is unproven, or untested, does not mean that it is not effective. It simply tells us that more research might be needed.
Putting it into practice
Even if the science is right and sound knowledge is generated, what researchers classify as effective might not be feasible for practitioners. When an intervention seems to be working despite established evidence to the contrary, what incentive is there to change? Similarly, keeping abreast of changes in received wisdom can be daunting for practitioners on the front line.
Science might come up with some interesting ideas, but ultimately the likelihood of adopting evidence is driven by our values. Mass sterilisation of young men might be an effective method of reducing teenage pregnancies, although for good reasons it would not be considered practical. Even the most scientifically defensible methods might produce socially unacceptable results.
The challenges trouble colleagues from academia and practice alike. However, there is a glimmer of light. How might patients, in whose name we enter into these debates, help scrutinise evidence and find solutions? Building on the groundbreaking work of the National Institute for Clinical Excellence, solutions might be found in working with patients and sharing responsibility for difficult decisions. Their well-being is what unites us, after all.