Artificial Intelligence (AI) and medicine

27 February 2020

Interview with 

Chris Smith & Phil Sansom, The Naked Scientists


Chris Smith and Phil Sansom delve into the world of artificial intelligence (AI) to find out how this emerging technology is changing the way we practise medicine...

Mike -  I think this is an area where AI stands a really good chance of making quite dramatic improvements to very large numbers of people's lives.

Carolyn - Save lives and reduce medical complications.

Andre - Solid algorithms aiding physicians in some of their greatest challenges.

Beth - That’s a concern - when machine-learning algorithms learn the wrong things.

Andrew - Frankly revolutionary productivity that we are now starting to see from these AI approaches in drug design.

Lee - It will replace all manual labor in all research laboratories. And then suddenly everyone can collaborate.

What is AI?

Chris - AI - artificial intelligence. For many, it’s a term straight out of sci-fi, conjuring up visions of utopias or dystopias, from films ranging from The Terminator to I, Robot to, well, the film “AI”!

Phil - But what was previously sci-fi is now closer to reality. AI technology exists, and there’s a brand new frontier where it’s being applied to the world of healthcare. We’re seeing AI helping to diagnose cancer, AI designing new medicines, and even AI predicting a person’s medical future.

Chris - But this isn’t the AI you see in the movies. In the words of University of Kent computer scientist Colin Johnson, “this is more software than Schwarzenegger”...

Colin - When scientists say AI, they often mean some piece of code that's running on a computer and it's taking some inputs. So if it was doing medical diagnosis, it might be taking scans and processing those and trying to generalise from it. So it will take, say, a thousand examples of these scans and the diagnosis that people had and build what's called a model, a kind of mathematical formula, that tells it how to predict when it sees a new example.
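Colin’s description - learn a model from labelled examples, then predict a new case - can be boiled down to a toy sketch. This isn’t how a real diagnostic model works; it’s a minimal nearest-neighbour illustration, and the two-number “scans” and labels below are entirely invented:

```python
# A toy "learn from labelled examples, then predict a new case" sketch.
# Each scan is reduced to two made-up numeric features; labels are invented.
from math import dist

# Hypothetical training data: (feature_vector, diagnosis)
training = [
    ((0.9, 0.8), "malignant"),
    ((0.8, 0.9), "malignant"),
    ((0.2, 0.1), "benign"),
    ((0.1, 0.3), "benign"),
]

def predict(features):
    """Predict by copying the label of the closest training example."""
    _, label = min(training, key=lambda ex: dist(ex[0], features))
    return label

print(predict((0.85, 0.75)))  # close to the malignant examples
```

A real system replaces the nearest-neighbour lookup with a learned mathematical model, but the shape - examples in, prediction out - is the same.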

Phil - In some ways, these predictive algorithms are just an extension of the tools scientists have always used to analyse data: statistics. The only difference is how complex and layered they can get.

Colin - AI varies in complexity from things that can run on your laptop to things that require huge networks of computers. One approach that's particularly been common in recent years has been deep learning. Let's talk about that in the context of computer vision - computers learning to see and recognise. And deep learning would start by recognising colours and lines, and then the next layer would recognise shapes, circles, corners, textures, and so on. All the way up to the final layer where it's recognising whole objects. It's able, for example, to tell apart cats and dogs.
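The “first layer recognises lines” idea can be illustrated with a single hand-written edge filter. A real deep network learns thousands of such filters and stacks many layers on top; this sketch, with a made-up row of pixels, just shows what the lowest layer computes:

```python
# First-layer "line detection" as a tiny 1-D edge filter, in the spirit of
# the lowest layer of a deep vision network. The image row is invented.
row = [0, 0, 0, 10, 10, 10]          # dark pixels, then bright pixels
kernel = [-1, 1]                     # responds to a jump in brightness

# Slide the kernel along the row; a big value marks an edge.
edges = [
    sum(k * p for k, p in zip(kernel, row[i:i + 2]))
    for i in range(len(row) - 1)
]
print(edges)  # large value where the dark-to-bright edge sits
```

Deeper layers would then combine these edge responses into corners, textures, and eventually whole objects.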

Phil - Could I download an AI to my computer?

Colin - You could download some code to do AI, what are called open source projects, projects that are made publicly available.

Phil - And code comes in lines, right? How many lines of code would I be getting?

Colin - Thousands and thousands of lines of code. But I think the complexity is not necessarily in the code as much as in the data that you'd need to train it.

Phil - Okay. Say I wanted a dog identification robot.

Colin - Yup.

Phil - And I had a picture of every dog in the world. How close would I be to the top AI systems that exist in the world today?

Colin - Pretty similar, to within probably a couple of percent of what something that was trained on a huge supercomputer could do. And that's facilitating revolutions like self-driving cars. The ability to recognise road signs and pedestrians and other vehicles needs to happen in a small machine that can sit inside your car.

Phil - But while I could make my laptop very good at identifying objects in pictures, apparently there are other jobs it would find much more difficult - like identifying language.

Colin - They're very good at translation, but they're very bad at converting language into something that we might think of as understanding, particularly visual understanding. “Can a crocodile run a steeplechase?” That's a piece of language. We immediately convert that into an image of a crocodile trying to jump over large hurdles, and we know that that's not possible. But a current AI system doesn't have that capacity for visualisation.

Phil - Are you saying Colin, that my dog translation robot isn't as easy to get?

Colin - I don't think you can do that. No, I don't think we could translate the language of dogs.

MEDICAL CARE

Chris - Phil, I’m sorry that Colin crushed your dreams of dog dialogue - but you must admit, the degree to which these algorithms can effectively learn from the data they’re given is pretty astounding. It’s also why some people refer to this as “machine learning” rather than the more general term “AI”.

Phil - It seems that computer vision - recognising patterns in images - is one of the places where machine learning excels. This is where healthcare comes in, because doctors spend lots of time examining scans or images. At Stanford University, Andre Esteva is applying machine learning to the diagnosis of skin cancer.

Andre - So we built computer vision algorithms that could, given an image of someone's skin, detect any lesions that might be concerning, and upon zooming into those lesions, diagnose them.

Phil - And does it work?

Andre - It worked really well, yes. We demonstrated that the algorithms are actually as effective as dermatologists at identifying whether a lesion is benign or malignant.

Chris - To create algorithms that are as good as actual doctors, Andre had to teach them, by feeding them a large amount of data...

Andre - We collected a dataset of 130,000 images covering over 2,000 different diseases.

Chris - Some of those images were used to train the algorithm, and others were used to test it afterwards, to ensure it actually worked.

Andre - The algorithms that we developed got a really good sense of which ones are more concerning and which ones are less, and with that we were then able to fine-tune them to work specifically well on skin cancers.
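The train-and-test discipline Chris describes is usually a simple split of the dataset, commonly around 80/20. Here is a minimal sketch with hypothetical image labels (the real study’s split and labels may differ):

```python
# Holding some images back for testing, as described above.
# The "dataset" is hypothetical: (image_id, label) pairs.
import random

dataset = [(f"img_{i}", "benign" if i % 3 else "malignant") for i in range(130)]

random.seed(0)                 # reproducible shuffle for the sketch
random.shuffle(dataset)
split = int(0.8 * len(dataset))
train, test = dataset[:split], dataset[split:]

print(len(train), len(test))   # 104 26
```

The key point is that the test images are never shown during training, so accuracy on them is an honest estimate of how the model will do on new patients.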

Chris - And not only could the AI distinguish a cancerous lesion from a normal one, it could even diagnose multiple lesions at once.

Andre - We actually built an AI that could take such a patch of skin with many lesions, and automatically zoom in on the ones that were most concerning.

Chris - This is just one example of how AI can help doctors with their work. Around the world, researchers are training algorithms to analyse scans and other medical information, including the DNA of cancers to track how the disease behaves and make predictions about the best treatments. Critically, in each of these examples, AI isn’t replacing a doctor so much as helping a doctor with the heavy lifting.

Andre - I often describe AI as like having a precocious resident following you around in clinic, able to provide second opinions and surface questions which you might not have considered.

AI IS SPECIFIC

Phil - With all these achievements, it’s tempting to imagine robot doctors of the future. But according to Oxford University computer scientist Mike Wooldridge, author of the book The Road to Conscious Machines, that’s unlikely...

Mike - In the last decade, we have seen breakthroughs in artificial intelligence, but you need to be very careful when you talk about a breakthrough. Those breakthroughs are in tiny, narrow little areas.

Colin - So a system that's built and trained to do, say, medical diagnosis won't be the same artificial intelligence system that's, say, playing a game of chess.

Phil - That’s Colin Johnson again. Andre Esteva’s skin cancer AI, for example, won’t become sentient - in fact it can’t even do what Kris’ blood flow algorithm can do.

Colin - Current AI systems are very specific and they don't have motivations. They're doing exactly what they are told to do.

Mike - It can't explain what it's doing. It can't generalise its strategies and explain them to you or me. It can't tie its shoe laces or cook an omelette or ride a bicycle. We can do all of those things. Human beings have a much, much richer, much more general intelligence and capability than anything we can build now or anything that we're likely able to build in the near future.

Mike - I think it is extremely unlikely that there will be some kind of intelligence explosion as happens in the Terminator films - you know, the idea that intelligence suddenly multiplies overnight, machines become sentient, and it’s all out of our control. Why isn’t it very likely? Because we’ve been trying to build intelligent machines for the last 70 years, and frankly, despite the fact that they can do some very narrow tasks very well, they are actually not that smart.

GLOBAL HEALTH AND PREDICTING THE FUTURE

Chris - So AI is not without its limitations, but there are some truly massive problems that it can help us to tackle. Mike Wooldridge again.

Mike - If you look at what makes healthcare expensive, one of the key challenges is expertise. Training up a doctor takes a long time. There aren't very many people who can do it. It requires a very special set of skills. It's very, very expensive, very, very time consuming. What we can do with AI is we can capture that expertise and we can get that expertise out to people in places where, at the moment, it's just impossible.

Chris - Crucially, the poorer parts of the world - where medical care is in short supply - might really benefit from software that eases some of the doctor’s burdens.

Mike - A nice example from here in Oxford is a company called Ultromics. They do ultrasound scans for hearts. Now, if you've ever looked at those ultrasound scans, it's impossible to figure out what's going on. The ability to interpret those ultrasound scans and detect abnormalities is a very, very scarce skill. What Ultromics have done is they've taken records of ultrasound scans over a decade-long period, they've given that information to AI programs, and they've built systems that can detect abnormalities on these ultrasound scans. And they've got approval from the FDA - the Food and Drug Administration in the United States - so they can go live with this technology. What that means is that a doctor in a remote part of the world, with a handheld ultrasound scanner connected to their smartphone, can do an ultrasound scan without having that expertise themselves. The scan can be uploaded securely to a repository in Oxford and automatically analysed, and they get that information back. So we'll be able to get healthcare out to huge numbers of people that just don't have it at the moment.

Chris - We’re in very early days here, because a lot of these technologies are right now getting off the ground. That’s partly because they rely on a) a certain amount of IT infrastructure, and b) a good supply of data that applies to the patients.

Mike - And I know a lot of people are concerned about the idea of an AI program doing healthcare for them. That is I think, a rather first-world concern. I think for a lot of people in the world, the choice isn't between a person looking at your ultrasound scan or an AI program looking at your ultrasound scan. It's the AI program or nothing. And that I think is a real huge potential win for AI technologies in the decades ahead.

Chris - And moving beyond diagnosis; some are starting to use AI to predict the future. Carolyn McGregor from Ontario Tech University is doing groundbreaking work here in paediatrics.

Carolyn - We can monitor premature infants, and those born ill at term by monitoring their breathing, their heart rate, and their oxygen levels in their tiny bodies. We use AI to detect and predict when the behaviors of these are changing, and we classify the changes into the likely set of conditions causing the change. This has great potential to save lives and reduce medical complications.

Chris - The project is called Artemis, and it’s particularly important because of how vulnerable these babies are.

Carolyn - The challenge for these preterm infants in particular is that they're trying to complete their development outside of the womb, and doing that presents them with many challenges: they're susceptible to many different conditions, and to many problems in the development of various organs.

Chris - Artemis is designed to run in real time, giving doctors information that a human would find difficult to process - which, like the earlier example of ultrasound scans, could ease the burden on doctors in poorer countries.

Carolyn - What we’re looking to do currently is deploy a version of Artemis for a hospital in India. Now this is interesting because we’re demonstrating how we can use the same techniques to support infants in low-income settings. This is very important, as the health outcomes for preterm infants in countries like India and areas of Africa are much worse than in Western countries.

Chris - AI seems to do a pretty good job of predicting medical futures in many different ways - as long as it has the right data. Which, according to Mike Wooldridge, we’re beginning to give it.

Mike - We will be able to monitor our physiology on a 24 hour a day, seven day a week basis and that information is going to enable us to manage our health on a much better basis than we can do now. I have colleagues who think that you will be able to detect the onset of dementia just by the way that you use your smartphone. Just by looking at the pattern of usage, by the way that you search for a contact in your contact list or the way that you scan your email. As those patterns change, as you start to get the very, very early signs of dementia, it could be the smart phone is going to be able to detect that on your behalf, long before there would be any sort of formal diagnosis.

Phil - Coming up after the break - AI that can invent new medicines, and peering inside the black box.

DATA AND BLACK BOXES

Phil - After all this talk about predicting your medical future with huge amounts of your personal data, it’s worth briefly taking a step back. Cambridge University’s Beth Singler researches the implications of the machine learning revolution.

Beth - AI also doesn't work unless you have large amounts of data, so it cannot progress in particular directions unless it has access to human subject data. Large companies are probably less of a concern than some of the user-chosen apps: there are something like 320,000 medical apps available through app stores, and that's a concern as well. We need to be protective of our data going forward.

Phil - And not only do you need to trust who has your data, but once the data goes in, it’s often a complete mystery what the algorithms will do with it. Colin Johnson again, followed by Beth.

Colin - One concern is that they are a black box. You don't understand what's going on within them. The understanding is very distributed across thousands or millions of little mathematical formulae and little pieces of data, and this is potentially problematic because if you're using these systems for something important, like making medical diagnoses or decisions about job applications, they can't necessarily explain why they've made the decision they have.

Beth - They don't always learn the things you want them to learn. For example, in looking at cases of pneumonia, an AI system deduced that people with asthma shouldn't be treated as urgently, because in the historical data, people with asthma seemed to do better overall when they caught pneumonia. But actually what was happening was that humans were triaging them more, giving them more attention precisely because they had asthma.

Phil - A human would probably have made that link - but a computer just sees the data in black and white.

Beth - Every piece of data that we would want to put into these systems is either short-cuttable in that way, or comes laden with its own human-inputted biases. For example, in the case of women seeking treatment for pain: historically, women are less likely than men to receive painkilling medicine in response to pain, and they're more likely to have to go back again and again to the GP. All that data gets into the system, to the extent that any kind of machine learning system is going to say that if you're female, you don't need treatment in the same way as if you're male. This kind of algorithmic bias is something we need to be really careful about.
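The pain-treatment example can be made concrete: a naive model that simply predicts the most common historical outcome for each group will faithfully reproduce the bias baked into its records. The numbers below are invented purely to mirror Beth’s point:

```python
# How biased historical records become biased predictions.
# These records are invented to mirror the example in the text:
# women were historically under-treated, and a naive model learns that.
from collections import Counter

records = [("female", "no_treatment")] * 70 + [("female", "treatment")] * 30 \
        + [("male", "treatment")] * 70 + [("male", "no_treatment")] * 30

def predict(sex):
    """Predict the most common historical outcome for this group."""
    outcomes = Counter(out for s, out in records if s == sex)
    return outcomes.most_common(1)[0][0]

print(predict("female"), predict("male"))  # no_treatment treatment
```

Nothing in the code is “sexist”; the bias lives entirely in the data, which is exactly why the data needs scrutiny.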

DRUG DISCOVERY

Chris - When you give an algorithm data about people, any biases in that data can affect a person’s health outcomes. But there’s a whole other area of medical science where the relevant data isn’t about individual people, but where AI could go on to save lives on a massive scale. We’re talking about drug discovery - inventing brand new medicines. Mike Wooldridge.

Mike - The pharmaceutical industry, although it's ultimately about designing and building new drugs, more than anything I think it's the quintessential knowledge-based industry. It relies heavily on processing large amounts of data and being able to make extrapolations from that data. And so I think it's very, very well positioned to be able to make use of new artificial intelligence techniques and machine learning techniques in designing those drugs and understanding their consequences.

Chris - This area in particular has recently become a massive, multi-billion dollar industry. Every big pharma company is getting in on the action. And it’s starting to pay off, because recently a company called Exscientia announced a world first.

Andrew - This is the first time a drug designed by AI will be tested in humans: DSP-1181, just starting phase one clinical trials, for the treatment of obsessive compulsive disorder.

Chris - That’s Exscientia CEO Andrew Hopkins. To create their drug, they used complex machine learning techniques inspired by the way evolution works in nature.

Andrew - We can generate millions of potential ideas inside the computer. And then we can use all of the data that we can collect from patents, from published scientific articles - we can take all that data and build predictive models. But one of the real challenges we face is that whenever we’re starting a new project, it’s just on the boundary, or sometimes just outside, the limits of what our machine-learning models can predict. So we need a different set of algorithms to help us in this learning phase. It’s a set of maths we call active learning. And active learning is not about just picking the fittest compound; it’s about selecting the most informative compounds to then make and test, to improve our models and improve our predictions. This is why we’ve seen the frankly revolutionary productivity that we’re now starting to see from these AI approaches in drug design. We discovered the drug candidate molecule that’s now going into the clinic in about 12 months - a fifth of the time it normally takes.
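The active-learning idea Andrew describes - pick the most informative compound, not the best-scoring one - is often implemented as uncertainty sampling. Here is a toy sketch with invented model scores; it illustrates the principle, not Exscientia’s actual method:

```python
# Uncertainty sampling, the core idea behind "active learning": instead of
# testing the compound the model scores highest, test the one the model is
# least sure about. All compound names and scores are invented.
candidates = {
    "compound_A": 0.95,   # model is confident: almost certainly active
    "compound_B": 0.52,   # model is on the fence
    "compound_C": 0.08,   # model is confident: almost certainly inactive
}

# Informativeness = closeness of the predicted probability to 0.5.
most_informative = min(candidates, key=lambda c: abs(candidates[c] - 0.5))
print(most_informative)  # compound_B gets made and tested next
```

Testing compound_B teaches the model more than confirming what it already believes about A or C, which is how a small number of experiments can rapidly sharpen the predictions.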

Chris - Part of the reason drug design normally takes so much longer is because making a drug isn’t just about helping the body in a specific way - it’s also crucial to simultaneously avoid harming the body by hitting the wrong target. Essentially, it’s about designing a key that fits only one lock and doesn’t accidentally open any others...

Andrew - It’s not just about designing a specific key to fit a specific lock. We also need to design that key so it avoids fitting maybe 21,000 other locks - which is effectively the number of proteins expressed by the human genome - because hitting those other proteins potentially causes side effects. So what we have is a very difficult design problem, which potentially runs into a very large number of dimensions. This is exactly the type of problem where we believe artificial intelligence can be used to satisfy this large number of design objectives.

Chris - Other objectives include making sure the drug can actually be manufactured easily, and that it can be taken up by the body. With so many potential pitfalls, it was particularly important that Exscientia’s algorithms were not a complete ‘black box’.

Andrew - The beauty of the algorithms is that we can then trace the contribution that every atom is making to all the design objectives which we are designing against.

Chris - Their new drug, DSP-1181, isn’t ready for the shelves yet - clinical trials take many years, and this is a part that the algorithms definitely should not be doing.

Andrew - How a drug is designed - whether it’s by humans or artificial intelligence or a combination of the two - that does not change how we want to then test for safety, and test for efficacy. One thing that’s important is to know which are the really important battles that AI can make a difference to. And we can make a difference to how we can rapidly discover compounds, and the cost it may take to discover a new medicine. And the speed of bringing it to the clinic. But also we must remember that human biology is incredibly complex. It would be a mistake for people to think that AI can allow us to predict all the possibilities of how a medicine may interact with the human body.

CHEMPUTER

Chris - In the next few years, we might see more and more drugs designed using this kind of evolution-inspired AI. And soon after, there might be some basic manufacture and testing by AI as well - thanks to devices like Lee Cronin’s “Chemputer”.

Lee - The chemputer is the world's first general purpose programmable robot that makes molecules on demand. The reason I set out to make this was actually to make a chemical internet that would help me search for the origin of life, believe it or not. We couldn't get funding for that on its own, and I figured that the same technology we use to search for biology would also be very good in drug discovery and making molecules and personalising medicine.

Chris - Like the AI that works in medical diagnosis, the chemputer was originally designed to take the grunt work out of chemistry, so the chemist could be free to do the interesting parts. It consists of both software and hardware.

Lee - It looks like a normal chemistry set actually, round bottom flasks, conical flasks, test tubes, pipes and things.

Lee - We have to feed in some chemicals, like putting ink into a printer, and also we put in a code and that code has two parts to it. One is a graph which is literally understanding where those chemicals have to be moved to. And the other is like a recipe - like cooking a souffle - what temperature, for how long, and what ingredients must be added together in what order. So we can make the perfect chemical souffle, if you like, every time, correctly.
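The two-part code Lee describes - a graph saying where chemicals move, plus a recipe of conditions - might look something like the sketch below. All vessel names, steps, and values here are hypothetical illustrations, not the chemputer’s real input format:

```python
# A toy version of the two-part "code" described above: a graph saying where
# chemicals move, and a recipe saying how they are treated. Every name and
# value is a hypothetical stand-in.
graph = {
    "reagent_A": "reactor",      # move reagent A into the reactor
    "reagent_B": "reactor",      # move reagent B into the reactor
}

recipe = [
    {"step": "add",  "what": "reagent_A", "to": "reactor"},
    {"step": "add",  "what": "reagent_B", "to": "reactor"},
    {"step": "heat", "vessel": "reactor", "temp_c": 80, "minutes": 30},
    {"step": "stir", "vessel": "reactor", "minutes": 10},
]

# A controller would walk the recipe in order, driving pumps and heaters.
for action in recipe:
    print(action["step"], {k: v for k, v in action.items() if k != "step"})
```

Because the recipe is explicit data rather than a human-read protocol, the same “souffle” can be reproduced exactly, every time.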

Chris - The result works like a 3D printer for molecules - but Lee started to apply AI to help the chemputer course-correct.

Lee - A bit like how an automated car works, the chemputer can drive perfectly when all the instructions are correct, but what about if something goes wrong or something is not quite as expected? Because we've put some sensors into the chemputer, it can feed back and say, "Oh, there's something a bit wrong here with the heating" or "we don't need to stay at this temperature for quite as long as we thought. Let's make another decision." And so what we've been doing in the last few years is integrating AI into the chemputer.

Chris - This combination of sensors and machine learning meant that the chemputer could start learning from, and experimenting on, its own recipes.

Lee - Now we don't tell the robot to make molecules. We tell it to make molecules that have properties. Say we want a blue thing or a nano thing. We're able to dial this in and make a sensor for a blue nano thing and then the chemputer is able, if you like, to search chemical space randomly to start with, and then use a series of algorithms to focus in to say, is that bluer? Make it bluer, more nano, more blue. Yes, hit stop. And it's literally the ability to make a closed loop system where you have molecular discovery, synthesis and testing in a continuous workflow.
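The closed loop Lee describes - make, measure, adjust, repeat - is at heart a simple feedback search. Here is a toy sketch in which an invented “blueness” function stands in for a real sensor reading:

```python
# A closed loop in miniature: make something, measure it, nudge the recipe,
# repeat until the measured property stops improving. "Blueness" and its
# response curve are invented stand-ins for a real sensor.
def measure_blueness(dye_amount):
    """Pretend sensor: blueness peaks at a dye amount of 5.0."""
    return -(dye_amount - 5.0) ** 2

amount, step = 0.0, 1.0
for _ in range(20):                       # discovery -> synthesis -> testing loop
    if measure_blueness(amount + step) > measure_blueness(amount):
        amount += step                    # bluer: keep going that way
    else:
        step /= 2                         # overshot: take smaller steps

print(round(amount, 2))  # converges near the optimum of 5.0
```

A real system searches a vastly larger chemical space with smarter algorithms, but the loop - propose, synthesise, measure, refine - is the same.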

Chris - At this point, the Chemputer not only does the grunt work of a chemist; it does the chemist’s full job. Lee is even looking into teaching it to look through research papers and pick up new techniques, by translating them into its own chemical language.

Lee - That was the vision for our initial paper that it would literally be able to play the literature. Almost like taking vinyl records, digitising them, putting them onto Spotify.

Chris - And if the machine can do the full job of a chemist, that includes trying to synthesise new medicines. Lee already has one working on short biological molecules called peptides.

Lee - Now peptides are a good example, because peptides are made by robots already; but our chemputer not only makes peptides, it can do any other type of chemistry on the peptide that you want. And that's getting the biochemists really excited, because we can start to dream up new types of drug molecules that maybe can look at the ion pumping system in the cell, or certain receptors at the membranes in the cell.

THE FUTURE

Phil - We’ve come a long way over the past couple of decades to get to medicines designed by inventive machines. The technology is here, and wherever there are mountains of data to be had, AI can crunch those numbers - from diagnosing cancer to predicting someone’s future health. And we haven’t even had time to get into the world of personalising medicine for individual people. It might take a little while, says Mike Wooldridge, but it seems to be on its way.

Mike - I think within the next two decades - probably three decades - we will see much wider take-up. Again, I don’t know exactly what’s going to be the killer healthcare app on your Apple Watch, but that technology just seems so promising it’s hard to believe that it won’t, within that two and a half to three decades, start to make a big impact.

Phil - But it’s important for everyone to remember that AI as it exists now is single-minded, totally literal, and only ever as good as the data you use to teach it. Beth Singler.

Beth - We need to recognise that these systems aren't objective, aren't purely rational. They will make those mistakes based on the data. So we need to be, kind of, continuously critical, and not get into a scenario where you just sort of trust the computer's answer...
