Access to and appropriate use of healthcare could be improved, and people could have more control of their health, with the use of artificial intelligence, writes Halima Khan

Babylon Health, the company behind the NHS GP at Hand app, has said that its software achieves medical exam scores that are on par with human doctors.

Currently, the UK version of the app allows users who live or work in central London to chat with an artificial intelligence about what might be wrong with them, as well as to talk to a doctor, all through their smartphone, for free.

The announcement from Babylon has prompted robust exchanges between the clinical community and Babylon’s own team. Why such heated exchanges? Partly because the stakes are high. Will products like these help an overstretched health system or exacerbate its current problems? Will they become an invaluable aid to doctors and patients or a source of frustration that cuts humans out of the loop?

Overall enhancement

We need to ensure today’s AI, as well as tomorrow’s, enhances the system overall: for citizens and frontline professionals, not just individual technology providers.

Apps like Babylon Health’s GP at Hand have a potentially important role to play, as long as the risks are fully acknowledged and addressed. As we argue in Nesta’s report Confronting Dr Robot, an important way to get the best outcome is to ensure that patients and healthcare professionals are at the heart of how AI-based services are developed and used.

So, first, what’s the case for AI-enabled health like Babylon’s GP at Hand? If developed correctly, AI products like these could help solve a major issue: how to reduce the unnecessary use of health services.

We know that 20 percent of GP appointments are for minor medical problems that could be treated at home. That is before counting the people who, for one reason or another, cannot get a GP appointment when they need it. Progress here could take significant pressure off the health system and improve patient experience.

New providers coming to market, such as Babylon, Your.MD, and Ada, are responding to this, focusing on diagnosing common ailments and suggesting courses of action. Through an app or a “chatbot”, this technology can elicit symptoms and interpret them, offering tips for self-management or advice to seek medical help.

This could be a big step forward from getting conflicting and unverified opinions from the internet. Access to and appropriate use of healthcare could be improved, and people could have more control of their health.

But there are also risks that need to be managed and mitigated if AI-based products like this are not to worsen current problems in the health system.

Manage risks

The first set of risks relates to AI being implemented beyond the scope of its competence. Today’s AI performs very well at narrow and well-defined tasks, such as looking at a medical image or, as we now see, answering an exam question. This does not mean that AI can bring to bear the full range of skills and knowledge required to be a doctor.

The combination of skills, experience, context and judgement at play in a consultation with a patient who has multiple, complex long-term conditions far outstrips current AI. Implementing AI beyond its competence risks patient frustration at best, and unsafe advice at worst.

A second set of risks relates to bias and inequality. GP at Hand discourages patients with more complex needs from subscribing to the service, which has led to accusations of cherry-picking that need to be resolved. If AI becomes a commonplace way of advising on who would benefit most from treatment, people who describe their symptoms differently, have hard-to-diagnose conditions or aren’t online may find it harder to access the right care.

A third set of risks relates to the effect the technology will have on demand for services. If AI-based triage products are to alleviate pressure, their advice needs to be robust and appropriate, and patients must have confidence in it. Risk-averse advice and false positives could generate a flood of unnecessary demand, putting an additional burden on already overstretched accident and emergency departments.

For many, the biggest question of all is whether doctors will be replaced by AI-based technologies.

The next few years are more likely to see AI influencing when you see your GP, and what you tell them: becoming a front door to the health system. This is a highly influential position, with commensurate responsibilities, but not a complete replacement for human interaction.

Getting to a good outcome will be more likely if concerns are genuinely listened to, and responded to. This would help generate greater trust, which in turn could open things up rather than ratchet tensions further.

Common ground might be built on a recognition that some human qualities are unsubstitutable and that machines have limits and should serve collective interests, while also recognising that humans are fallible and can benefit from well designed machine intelligence.

Nesta’s new Centre on Collective Intelligence Design aims to find ways to combine the best of machine intelligence with the best of human intelligence.