Paradoxically, one of the most important determinants of healthcare quality and efficiency is one that NHS managers can do very little to influence. Indeed, it is practically invisible to the managerial gaze: the quality of clinical decision making.
Whether - and how - doctors and other clinicians make appropriate judgments about diagnosis and treatment is central to whether your patients are well served and whether resources are well used. But can you really claim you have any sense of how well the clinicians in your hospital or primary care trust are doing? Or that you have a game plan for helping them do better?
The challenge is well set out in How Doctors Think, a recent book by Harvard Medical School professor of medicine Jerome Groopman - a book concerned with “what goes on in a doctor’s mind as he or she treats a patient”.
His foundational observation is that in practice doctors make diagnoses through “pattern recognition” (in effect, stereotyping), not linear step-by-step decision trees. In other words, doctors don’t collect data and then generate hypotheses about possible diagnoses - they begin to think about diagnoses from first impressions. Most doctors produce two or three possible diagnoses early on, while the really talented juggle four or five. But all of them use short cuts called “heuristics”.
This line of reasoning has some parallels with Thomas Kuhn’s famous sociological debunking of the idealised scientific method described by Karl Popper and others, in which Kuhn convincingly juxtaposed the actual practice of day-to-day science with its stylised hypothesis generation and testing.
Groopman talks about the way evidence-based protocols under-determine what may be in an individual patient’s interests, based either on the biological particularities of their case, or their specific preferences. These algorithms “quickly fall apart when a doctor needs to think outside their boxes, when symptoms are vague, or multiple and confusing, or when test results are inexact”. So doctors ought to be more individualistic and humanistic in their decisions.
However, Groopman goes on to describe how in practice doctors’ decision making is affected by “cognitive traps” - which cause perhaps 15 per cent of diagnoses to be wrong, not because of inadequate medical knowledge (on the part of either the doctor or medical science as a whole) but because of biases in processing information.
These include “availability biases” - judging the likelihood of an event by the ease with which it comes to mind; “confirmation biases” - confirming what you expect by selectively accepting or ignoring information; “attribution error” - letting stereotypes about the patient colour the diagnosis; “anchoring” - fixing on one possibility and failing to consider others; “affective error” - wishful thinking; “premature closure” - settling on a diagnosis before it has been fully verified; and “diagnosis momentum” - a provisional label that hardens as it passes from doctor to doctor.
These are often made worse when misaligned economic incentives motivate clinicians to do the wrong thing. The issue in medicine is not that incentives don’t work; it is that they work too well, so they have to be very carefully calibrated to support desired professional behaviour. In Britain the tendency of dispensing GPs to prescribe more high-cost branded medicines illustrates the point. In the US, radiologists with ownership stakes in imaging equipment tend to over-investigate.
How then might NHS managers support clinicians in improving the quality of their clinical decision making?
The first part of the answer lies in ensuring enough time is built into job plans for the contemporary version of “deaths and complications meetings”. Clinical governance leads need to ensure clinicians participate, because decision-making expertise is acquired not simply through years on the job but through exposure to feedback. Every practising clinician should routinely be engaged in this. As a footnote, it also means hospitals (and commissioners) properly supporting post mortems - whose numbers and funding have been under pressure in recent years.
Second, clinicians need wherever possible to be stripped of economic incentives, where they exist, to over-prescribe and over-treat.
The third part of the answer lies in the selective adoption of appropriate clinical decision support systems - but with proper ability for individual clinicians to override them. Here I am more optimistic than Groopman. Work by clinicians such as David Eddy in California using Bayesian methods of modelling points to the future. It will increasingly include tools patients can use to assign probabilities to their symptoms and treatment options - making sense of the overload of raw information on the internet. (Four years ago Googling breast cancer got you 9.6 million links; today it gets you 50 million.)
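The appeal of such Bayesian tools is that they make the base rate explicit - exactly the quantity the availability and anchoring biases lead clinicians to neglect. A minimal sketch of the arithmetic involved (the prevalence and test-accuracy figures below are illustrative assumptions, not drawn from Eddy’s work):

```python
def posterior_probability(prevalence, sensitivity, specificity):
    """P(disease | positive test), by Bayes' theorem."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# A test that sounds accurate (90% sensitive, 95% specific) applied
# to a condition affecting 1 in 100 patients:
p = posterior_probability(prevalence=0.01, sensitivity=0.90, specificity=0.95)
print(f"{p:.1%}")  # about 15% - most positive results are false positives
```

The counter-intuitive result - a “positive” on a good test still leaves the diagnosis unlikely - is precisely the kind of correction a decision support system can supply and an unaided first impression tends to miss.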
Fourth, patients need to be helped to recognise the reality of medical uncertainty, while helping doctors do better. As Groopman puts it, “doctors desperately need patients and their families and friends to help them think”.
So the questions patients can ask to help jolt doctors out of some of the commoner cognitive biases include: what else could it be? What’s the worst thing this can be? What body parts are near where I am having my symptom? Is there anything that doesn’t fit with your suggested diagnosis? And is it possible that I have more than one problem?
Of course doing all of this is quite a tall order. But the stakes for patients are high. And in some respects it is not a new requirement. Back in 1878 Nietzsche wrote: “A doctor is no longer at his intellectual peak just because he knows the best new methods… he must also have a talent for conversation… the tact of a police agent or lawyer in divining the secrets of a soul… in short, a good doctor today needs the skills of all other professional groups.” As it was then, so it is now.