In this video, HSJ senior insights correspondent Jack Serle is joined by Sonny Shergill, VP of commercial digital health at AstraZeneca, and Will Ricketts, lung cancer lead at Barts Health NHS Trust, to discuss the role AI is already playing in cancer care, and what role it might play in the future. Between them, they explore where AI might fit in oncology pathways, how workflows might accommodate different levels of digital literacy – both of patients and clinicians – and what considerations need to be made around data security.
Jack Serle
Hello, I’m Jack Serle, senior correspondent at the Health Service Journal. Welcome to the latest episode of the Cancer Project Zero video series. One in two people will get cancer at some point in their life, making it the UK’s biggest health burden. As part of Cancer Project Zero, HSJ has partnered with AstraZeneca to explore the possibility of one day eliminating cancer as a cause of death. Helping us do that are several leading voices from across the UK cancer community. In this episode, I’m delighted to be joined by two of the most important figures involved in Cancer Project Zero: Sonny Shergill, VP of commercial digital health at AstraZeneca, and Dr Will Ricketts, lung cancer lead at Barts Health NHS Trust. With Sonny and Will, we will explore how AI is shaping the future of oncology care. So no small subject to grasp. And I want to start with no small question, which is: what is the potential here of AI in healthcare, specifically in oncology? Will, as you’re sat next to me, why don’t you take that one first?
Will Ricketts
I think the potential is probably limitless, but equally, I think we perhaps don’t yet know what it is. I don’t think we’ve necessarily grasped it. I feel like AI for, I don’t know – the masses, almost – is very much in its infancy and yet growing exponentially as a service. And I’m sure we’ll come back to it in more detail later, but we’ve looked at the roles of AI all the way through our lung cancer pathways, from early diagnosis to streamlining pathways, to helping with diagnostic tests, to helping guide treatment algorithms and the best treatments for our patients. And that’s, I would imagine, the tip of the iceberg. As we do more we will learn more, and as we learn more we’ll do more. I think it’ll probably be a sort of exponential growth.
Jack Serle
Sonny, how do you, how do you see things?
Sonny Shergill
I think Will said it really well. For me it’s a big unlock, and we’re at the start of a revolution. But it’s an unlock in terms of access to insights, information, tools and so on. Where those have often been the preserve of one side of the table when it comes to healthcare delivery, they’re not anymore. And I think that changes the dynamic that we’ll see over the next couple of years. We’re all on a big learning curve around digital literacy, and as that starts to grow and this becomes more part of our day-to-day, I think that will further accelerate what value gets generated from AI.
Will Ricketts
Yeah.
Jack Serle
Will touched on it a bit in his answer, but where are we right now in terms of some of the technologies that are being used in cancer care? I’m thinking of, you know, things in diagnostics or in treatment, or even, as you say, helping with workflow and patient flow, that sort of thing. What’s the picture as it stands now?
Sonny Shergill
I mean, I think we’re at different parts of that journey as you go through. We’re seeing phenomenal things happening in the diagnostics space. You know, you’ve been leading some of those at your institution, in terms of using AI to make current diagnostics better, right through screening and X-rays and CTs and what have you. I think that’s getting better and better established. There’s still work to do when you go further downstream, in terms of how patients actually engage with and get supported by AI-developed products as well. But it really depends where you are. In terms of diagnostics, and in terms of treatment decisions and informing those, I think we’re further ahead than in the delivery of care at the patient interface. I think there’s work to do there. But again, massive advances are happening.
Jack Serle
And in terms of you and your colleagues, how do you view the introduction of AI tools, AI assistants and that sort of thing into the care that you’re providing, or into the workflows around it – into what you might think of as the more administrative side of things?
Will Ricketts
I think it’s really interesting. With all new technology and all new ways of working, there’ll be early adopters who are keen – people like ourselves, I guess – and then there will be people who maybe want to see how it goes, see how these things develop and evolve. Probably one of the most interesting examples we’ve got at the moment: we’re using an AI tool to pre-report chest X-rays. The chest X-rays still get reported by a consultant radiologist, but they get an almost immediate prescreen by an AI tool, which basically puts them into two piles, abnormal or normal. And then, as so often in the NHS, capacity is an issue: the normals are down-prioritised for reporting, and the abnormals are up-prioritised for reporting. We as a trust, or at least one of our sites within the trust, have proactively turned off the AI read for chest X-rays done in the A&E department, the emergency department, because they were worried about how the rollout would be perceived, and how it would influence trainee doctors, resident doctors, interpreting a chest X-ray if they’re looking at the image themselves and looking at the AI read, and no one’s really given them training on how to interpret the AI read. The AI read is very deliberately set up to be highly sensitive, but not necessarily very specific, because of its role in sorting X-rays into these two groups. But if you’ve got someone who hasn’t got, as Sonny says, the digital literacy – through no fault of their own; these things are moving fast, and they haven’t been trained to know that that’s how the algorithm is set up – all of a sudden they’ve got a chest X-ray that to their eye is normal, the AI is telling them it’s abnormal, and they think, what do I do with this? How do I interpret this in the clinical context of the patient in front of me? So I think that’s an interesting example of where we’re at in rolling out some of these AI technologies.
Jack Serle
Just quickly on the point of sensitivity versus specificity, just sort of clarify that for the uninitiated.
Will Ricketts
So setting it up for sensitivity is basically saying: I would rather the tech overcalls an abnormality that might not be there, because my role – or the AI’s role in this – is to flag the film for a high-priority second read from a clinician. You would much rather it overcalls a sort of fake abnormality, if you like – something that isn’t there, but that it flags as really being there. That’s highly sensitive. The other side of that coin is the specificity: in being highly sensitive, it will sometimes pick up something that either isn’t really there at all, or is there but is nothing to worry about. That’s the trade-off, and that’s how we’re using it: to get that X-ray in front of the eyes of the expert, and down-prioritise the ones that are far more likely to be normal.
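To make the trade-off Will describes concrete, here is a minimal sketch of the arithmetic. The counts are invented purely for illustration and are not drawn from any real chest X-ray tool:

```python
# Hypothetical illustration of the sensitivity/specificity trade-off in an
# AI triage tool. All counts below are invented for illustration only.

def sensitivity(true_pos, false_neg):
    """Share of genuinely abnormal films the tool flags as abnormal."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Share of genuinely normal films the tool leaves unflagged."""
    return true_neg / (true_neg + false_pos)

# A triage tool tuned to be highly sensitive: of 100 truly abnormal films
# it misses only 2, but it also overcalls 30 of 900 truly normal films.
tp, fn = 98, 2      # abnormal films: flagged vs missed
tn, fp = 870, 30    # normal films: passed vs overcalled ("fake abnormalities")

print(f"sensitivity: {sensitivity(tp, fn):.2f}")  # high: few abnormals slip through
print(f"specificity: {specificity(tn, fp):.2f}")  # lower: some normals get flagged
```

With these illustrative numbers the tool is 98 per cent sensitive but only about 97 per cent specific: almost nothing abnormal slips into the "normal" pile, at the cost of some normal films being queued for a high-priority expert read.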
Jack Serle
I think this point around being careful with what you introduce trainee doctors to, and making sure that they’re upskilled both in viewing the images as they are and in interpreting the AI read, really means you have to introduce these things quite carefully. But then obviously you want to be moving as fast as you can, because these innovations can help you with so many different things. So it’s a thorny dilemma when thinking about changing how people do things and changing work processes. In terms of your experience, both in the UK and overseas, of implementing these sorts of technologies and bringing clinical staff along with you, how have you seen that play out in different systems?
Sonny Shergill
It’s very interesting, and it’s a really key topic in terms of making these things sticky. The technology is only, like we were saying earlier, about 30 per cent of the challenge here; the big piece is literacy, getting people to change how they work. And when you’ve got many, many health systems where everybody’s under pressure and highly fragmented, people don’t have time to stop the clock and actually change what they do. It means things have to be really easy. And I think technology has a fix right there too, in creating and designing the UX to be really intuitive. One other thing on the point you made, Will: as we use more of these tools, we have to also ensure that the infrastructure is there to receive that information. Because if you are now diagnosing twice the amount of lung cancer patients that you were a year ago, can the infrastructure actually manage that? Obviously in the UK it’s a slightly different situation, but in other markets it’s a real issue.
Jack Serle
On the other side of the equation are the patients. How are they viewing the introduction of AI tools into their care and so forth?
Will Ricketts
I think that’s interesting. I haven’t had much in the way of conversation with patients about it, and maybe that’s where we are in the NHS in the UK at the moment with things like the chest X-ray reporting. We’ve also got some AI tools that help us look at some very specific things on CT scans. And I think right now maybe the patients are blissfully ignorant about stuff running in the background. Because the AI is at a fairly early stage, the tools are very much used to flag and prioritise – to aid clinician decision making or clinician prioritisation – so it doesn’t ultimately change the clinical decision around the individual patient’s care. I don’t know if you’ve seen more, maybe in services or centres that are a bit more advanced, where the AI is doing a bit more of the heavy lifting in the clinical decision making, and how patients might perceive that.
Sonny Shergill
Yeah, I think when it comes to things like clinical decision making, you want your doctor to make that decision, and I don’t think that’s going to go away. How they get there, and what tools they use to get there, is secondary, I think. The other thing is, if we think about us all as being patients, our own level of adoption of technology is going to track with this. I was in San Francisco a month ago and I used a driverless car for the first time. Initially I was like, am I going to be worried here? But the experience is so seamless you forget there’s no driver in the car, and it’s great. So the more we adopt technology in our lives, the more we’ll see that come through from a patient standpoint. But it’s got to be frictionless. Certainly for healthcare, human in the loop is a key thing, and I don’t think that is ever going to go away. But an AI-assisted human is going to be able to do so much more.
Jack Serle
And it comes back to the point we made earlier that the potential here really is limitless. But I just want to pick up a little bit on the conversation about data and data-armed patients – an interesting turn of phrase. One of the risks here, of course, concerns those people who don’t have that sort of access because of digital inequalities, or indeed the data itself carrying forward embedded inequalities that could lead to ethical problems down the line. I’m keen to flesh out the ethical AI component here a bit. How do you ensure that these tools and processes you’re introducing are introduced in a way that’s ethical and going to do no harm?
Will Ricketts
It’s interesting – we’ve wrestled with this quite a lot, not necessarily with AI, but with tech solutions in general: patient-facing apps, those kinds of things. Are we in some way increasing the divide, with our more digitally literate patients getting better care than our less digitally literate patients? Actually, we’ve always come to the conclusion that if I can liberate time – if I can streamline the care of my more digitally literate patients – then instead of leaving behind the less digitally literate patients, that liberates my time to spend more with them. So actually it’s a win-win: everybody gets better care. The digitally literate patients hopefully get better care in a manner that suits them, because they like that tech interface or whatever, and that frees up my time to spend more time with the patients for whom that isn’t the right thing. So I don’t think we should fear a digital divide. I think we just need to reframe it and think about it in a slightly different way.
Sonny Shergill
No, I love that – I think that’s a great point. For me there are two pieces we have to think about here. To your point around the data that gets used to train these models and to run these different tools that we have: we’re going to move to more and more specialised data sets, and we need to do that. So the model training piece is going to be key, and I think that will inform better outcomes, fewer [AI] hallucinations, and so on. I think we have a great proxy, though, for digital fluency, and that’s overall health fluency, because we’ve been dealing with that forever, right? You have some patients that are really on top of their disease – they’ve done their homework and so on – and then you have ones that have other priorities in life, and their healthcare isn’t at the top of their tree because it can’t be. And we deal with that, right? So I think that will be a very good proxy for how we deal with the digital literacy of patients as well.
Jack Serle
Going back to what we were talking about earlier with patients and their attitudes: one source of anxiety for people with AI in general – and I’m sure for people with cancer experiencing care – is the security of their data and making sure it’s not going to be at risk. I’m sure you can’t mitigate risk entirely, but what do we need to be doing more of, in this country and around the world, to make sure that people’s concerns there are assuaged?
Sonny Shergill
It’s really interesting – we talk about this topic quite a lot. In general, quite rightly, people are concerned about data and data privacy and all of that, and, in the balance of your life, that’s a concern with your financial data and all of these things too. What we’re finding is that when it comes to people’s specific health data, when they have a health issue to deal with, they’re a lot more open about sharing their data – especially within the relationship with their physician or their healthcare provider. In those settings it’s a slightly different relationship. We have a burden on us, as people within the technology realm, to make sure checks and balances and governance are all covered as well, for sure. But I think it’s a slightly different dynamic when you look at it as a whole versus a patient managing their own disease.
Will Ricketts
I agree. In fact, I think patients are often surprised by how tightly controlled their data is, how difficult it is to access: what do you mean you can’t access my scan or my X-ray from another hospital at the other end of the country? I think they’re more surprised by that than the other way around. And again, it’s different, but it’s not new. When I was a very junior doctor, one of the consultants used to drive around with piles of patient notes in the back of his car. That’s probably as big a risk, and would probably shock patients far more than some other kind of data breach or cyber attack or whatever. So I think it’s not new – it’s just changed.
Jack Serle
And I’m sure there are no consultants doing that now, and they have all learned their lessons, I have no doubt. Sticking to this risk-mitigation-and-opportunity element, though, and thinking about regulators: they have a vital role in making sure they’re approving tools, products and innovations that are going to be safe, and that clinicians and patients are confident in that judgment, whilst at the same time not stifling innovation – finding that right balance. Regulators in this country have been working on this, specifically with AI, for a while. Do you think they’re getting the balance right? How are they getting on? And what do you want to see from them to make sure that they keep striking that path?
Sonny Shergill
You know, again, this is a topic all over the world – the same thing whether you’re with the FDA in the US or you’re over here in the UK. I think the challenge is that in the first iteration of digital tools and digital technology, the technology was created end to end as a product, and then it got reviewed and approved, and then used or not used. Now, certainly with Gen AI, these things are continually learning, changing features and improving, and that’s a challenge from a regulator’s standpoint: when do you say, okay, this is safe? And then how do you build in checks and balances as these things evolve? I think that’s one of the struggles. The flip side is that having regulation is actually a very good thing. Certainly for physicians, hospital systems and pharma companies, we’re used to working within a regulated environment. It gives you your checks and balances, so it can be a little bit of a superpower, actually, rather than not being experienced in navigating highly regulated areas – quite rightly regulated, as well – like healthcare.
Jack Serle
Having that imprimatur, that stamp of security – how important is that to you in your work?
Will Ricketts
I agree – I think it’s really important. We’ve got a project we’re trying to roll out at the moment, using an AI tool to streamline our MDT (multidisciplinary team) meetings, and the lead time is kind of frustratingly slow. But actually, as you go through it, you see how important it is. It has gone through the information governance team, it’s gone through the IT team, it’s gone through a clinical risk assessment. And I think that is really important, because you want to get it right. The last thing you want to do is have what would otherwise have been a brilliant tool where something goes wrong because you didn’t go through that process, and you set the whole project back – or even the tool back – by years, because people hear about this horror story that happened with this piece of AI because we didn’t go through the process. So I agree with Sonny: governance can be frustratingly slow, but it’s incredibly important to get it right.
Jack Serle
I wonder about your feelings on getting your crystal balls out and thinking 10 years from now. What’s cancer care going to look like, drawing on all of these new and old technologies – chest X-rays, for example, being a perfect example of the old – to improve that pathway, to improve patient experience and outcomes? How’s that going to look in a decade’s time?
Will Ricketts
Ten years is a long time. If you roll back 10 years, or maybe 12-ish years, lung cancer wasn’t a very attractive field for us: there was just traditional chemotherapy and nothing else. And you look at the drugs that have come along in that time; PET scans are only just about within that window; EBUS is just about within that window. So our diagnostics have changed phenomenally, and our treatments have changed phenomenally. Trying to think forward another 10 years, I agree – I think personalised care is a lot more where this is going, and I think that probably will be AI-supported, because who knows? There’ll be a blood test that we don’t even think about – you see it on your screen, but you don’t engage your brain in it – and actually you find out that it helps predict your response to whatever drug, or helps predict your risk of lung cancer recurrence. It’s something the AI might pick up on that we haven’t, or there might be something about how the AI interprets the scans. I think personalised care around the best treatment for patients, and personalised care around the best follow-up regimes for patients. At the moment we’re very clunky: you’ve had your lung cancer surgery, so you get your CT every year for five years, because it’s what we’ve always done, not because there’s any particular reason, you know. But what if something about your tumour predicts that you’re at higher risk of recurrence and therefore you should get your CT every six months? Or something about your tumour says you’re at really low risk of recurrence and actually you don’t need the extra scans – maybe you have CTs every two years, but maybe for ten years instead of five. So I think that kind of personalised care, and, as Sonny says, personalised care about the right treatment for you. It might be that, yeah, the guideline says you’ve got this mAb and that mAb as two options, but there is something in the AI algorithm that flags: actually, mAb one is better for you than mAb two, even though they both have essentially the same licensing.
Sonny Shergill
I think you can work that back up the chain as well. We’re very used to now, as an industry, saying: okay, we’ve got this innovative medicine, so there should be an innovative diagnostic that goes with it – certainly in oncology. I think you’re also going to get a third layer: what’s the innovative technology that goes with that as well? It’s already happening. But to your point, how do you help define whether that is exactly the right treatment for the specific patient? What’s going to be their outcome? What else do they need? I think that will become more of a reality, probably sooner than 10 years.
Jack Serle
Well, I hope so. But I think that speaks to something I find fascinating about this whole area, which is that often we’re talking about marginal benefits or marginal gains. It’s an extra half hour for some consultants each week. It’s not having to come in every year for a CT, but coming in as and when, which to the individual is a benefit. But then you build them up and aggregate them across a population or across a health system, and you can really start to see tangible, massive gains, which I find quite exciting. I think it’s going to be very interesting. Well, thank you both for your time – this has been a really stimulating conversation. A big thank you to Dr Will Ricketts and Sonny Shergill for joining me today. We appreciate you tuning in to the latest episode of AstraZeneca’s Cancer: Project Zero video series. In this episode, we explored the role of AI in cancer care and how it can improve the UK healthcare landscape. We hope our conversation has encouraged you to play a part in building a future where zero people die from cancer. Be sure to watch upcoming episodes for more valuable insights from oncology leaders.
The views expressed are those of the individuals and not those of AstraZeneca or any other organisation.
To find out more about AstraZeneca’s Cancer: Project Zero campaign, click here.