AI cancer scans, and heatproof drone plans

Plus, eco-friendly supercomputers, and exotic carbon found in deep space
30 June 2023
Presented by Chris Smith


Cancer cells


How an artificial-intelligence technology from Cambridge is helping cut cancer treatment waiting times, how the James Webb Space Telescope is shedding new light on the chemical building blocks of life, the universe and everything, and why Finland’s become a hot spot for the world’s computer scientists.

In this episode


00:53 - AI tech speeds up cancer treatment

How new AI technology can cut the time needed for radiotherapy treatment to start

AI tech speeds up cancer treatment
Raj Jena, Addenbrooke's Hospital

A new type of artificial-intelligence technology that can cut the time cancer patients must wait before starting radiotherapy has been pioneered by Cambridge University researchers. The technology, which will now be offered at cost price to all NHS trusts in England, has been trained to recognise and highlight the healthy tissues on a patient’s body scan that the X-ray radiotherapy dose needs to avoid hitting. This reduces the time it takes cancer specialists to map out a patient’s radiotherapy regime, dramatically speeding up treatment. Addenbrooke’s Hospital consultant Raj Jena has been leading the work…

Raj - The oncologist is trying to target the cancer within the body. Sadly, many of the cancers that we're trying to treat are located deep in the body. The only way for an X-ray beam to get there is to go through some healthy tissue. We could get rid of almost any cancer in the body with X-rays if we could get enough dose in. The problem is you have to go through healthy tissues to get there, and that limits how much you can give. So it's as much of a task to map out the cancer as the target as it is to map out all of the healthy tissues in the path, so that we can actually get the beams in safely.

Chris - What you're saying then is, because we know there are things in the way of where you want to go, you're looking for the safest route to get the maximum X-ray dose into the cancer with the least dose into the healthy tissue, so you minimise the harm.

Raj - Exactly right. So the robotics of the system is very, very good at arranging different radiation beams to come in from different directions. But all that's no good if we don't have a map that shows us where we need to avoid, where the no-go zones are.

Chris - And that's what has hitherto been a very labour-intensive human task.

Raj - Exactly. So for an oncologist working on this, depending on where the tumour is, it can take anything from 25 minutes to upwards of two hours to mark out all of these healthy tissues. So we were looking to try and accelerate that with the AI.

Chris - How does it work?

Raj - Well, what happens is once the patient is scanned, the data goes off through a safe mechanism so that we can run our machine learning algorithm. And what's returned to the oncologist is a scan, but with extra smarts, because instead of just having the image to look at, they get the image with all of the healthy tissues already marked out. And that means that, overall, the oncologist using this technology can go about two and a half times faster. That's what we see as a real-world acceleration from introducing the technology.

Chris - How does the AI do that? How does it know what is healthy tissue in the first place?

Raj - So it has to be trained just as we all have to be trained. So we had to build a dataset in each case of about 150 patients. And for each of those patients, we had experts mark out exactly where the healthy tissues were. And then it took hundreds of hours to train the best model that we could. And then once we did that, we started evaluating its performance. When we got to the point where we were seeing performance that was starting to match human performance, then we knew we were onto something.
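For a sense of how "matching human performance" can be measured in segmentation work, here is a minimal sketch that scores a model's contour against an expert's using the Dice coefficient, a standard overlap metric in medical image segmentation. Whether this is the metric the Cambridge team used is my assumption, and all the masks and numbers below are illustrative.

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Overlap between two binary masks: 1.0 = identical, 0.0 = disjoint."""
    inter = np.logical_and(pred, truth).sum()
    return 2 * inter / (pred.sum() + truth.sum())

# Toy example: an expert-drawn organ mask, and a model mask shifted
# down by 2 pixels (both hypothetical).
expert = np.zeros((64, 64), dtype=bool)
expert[20:40, 20:40] = True
model = np.zeros((64, 64), dtype=bool)
model[22:42, 20:40] = True

print(f"Dice score: {dice(model, expert):.2f}")  # prints: Dice score: 0.90
```

In practice a model would be evaluated this way organ by organ across a held-out set of patients, with scores compared against the agreement between two human experts.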

Chris - I was going to say, having built the system, did you then give it an exam, as it were: compare you versus it, to see if it can do a better job than you can?

Raj - Exactly. So what we did, we gave our oncologists the preparation work. In some cases it was done by the AI. In some cases it was done by their colleagues, and we didn't tell them which was which. And we actually found that in two thirds of the cases, they actually preferred to start with the AI rather than their own consultant colleagues. And that's when we really knew we were onto something.

Chris - So it produces a scan with a lot of the markup done already to guide people in the right direction. So what does the oncologist add? Is it just a safety check, or is it that there's still work to be done by the human here?

Raj - So the system hasn't learned to mark up a tumour. Tumours are much more complex and varied. Our normal tissues follow a very set pattern, so it's quite easy to get them to learn that, but it's an order of magnitude more difficult for them to learn how to segment a tumour. So at the moment, what the oncologist does is go straight to the tumour and devote the lion's share of their time to marking that out as precisely as possible. And then they have to check everything that the AI does, because at the moment that is our safety check: everything that the AI produces can't be accepted into our clinical system until the oncologist approves it and says, 'yeah, this is safe to use.'

Chris - And what sort of a difference is this making?

Raj - Well, we have introduced this and other technologies into our workflow at Addenbrooke's. Nationally, we have a 31-day target between a patient being told they are going to have a radiotherapy plan and actually starting it. At Addenbrooke's, for the fastest growing tumours, we're aiming for 14 days, and actually we're aiming to go even faster than that. We'd like to get it down to five days if possible. And it's these sorts of technologies that help us do that.

Chris - And does this make a difference to the outcome for the patient? I know that it's a bit less time and that's maybe good psychologically that something's being done a bit sooner, but does it make a difference to disease and clinical outcomes?

Raj - Indeed, it does. What we know is that for the very fastest growing tumours that we deal with, you are 2% more likely to control a tumour for every day that you can shave off that waiting time. So it really does make a difference. And on top of that, there's what you mentioned: that feeling of staring down the barrel of a gun when you're waiting, knowing that you need to start radiotherapy, and thinking, 'why can't I start now?' That must be a terrible feeling too.
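As a back-of-the-envelope illustration of that 2% figure, here is what it implies for a realistic reduction in waiting time, under my own simplifying assumption that the per-day benefit compounds multiplicatively (the interview doesn't say how it combines over multiple days):

```python
# Relative gain in the chance of controlling a tumour when treatment
# starts `days_saved` days sooner, assuming (illustratively) that the
# quoted ~2%-per-day benefit compounds multiplicatively.
def relative_control_gain(days_saved: int, per_day: float = 0.02) -> float:
    return (1 + per_day) ** days_saved - 1

# Going from the national 31-day target to Addenbrooke's 14-day aim
print(f"{relative_control_gain(31 - 14):.0%}")  # prints: 40%
```

So even on this rough reckoning, shaving 17 days off the wait corresponds to a substantial relative improvement for the fastest-growing tumours.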

An image of a spiral galaxy

06:40 - James Webb finds exotic carbon compound

The James Webb Telescope has found a new carbon compound 1350 light years away

James Webb finds exotic carbon compound
David Whitehouse

A team of international scientists has used the James Webb Space Telescope to detect a new carbon compound in deep space for the first time. It’s the substance “methyl cation”, or CH3+ - and it was traced back to a young star system, catchily called d203-506, 1350 light-years away from Earth. David Whitehouse is a space scientist and author and a regular contributor here on the Naked Scientists.

David - It's a very special compound, because interstellar chemists and the people who study the chemistry of very early stars and their environments have wanted to find CH3+ for many years, in fact many decades. But it hasn't really been possible until now, because where they found it is in a disc around a very young, very small star called a red dwarf: a circumstellar disc which contains lots of the material that's going to form planets eventually. And in the past, the only way you could really tell if there were interesting molecules in a circumstellar disc was to use a radio telescope: you detect various spectral lines in the radio spectrum. But the thing about CH3+ is that you can't see it in the radio; it shines in the infrared region of the spectrum. And this is ideal for the James Webb Space Telescope. So they looked at this very young red dwarf star, which is 20% the size of the sun, with its circumstellar disc. And, with the exquisite precision that this telescope has, they were able to see in the infrared spectrum the telltale signs of CH3+, ending a decades-long search for this outside the solar system.

Chris - Is this sort of two things, then? One, a proof of concept that the James Webb really can deliver and can see these things in unprecedented detail. But secondly, presumably seeing this, with the resolution they now can, enables us to put to the test some of our theories about how these sorts of chemicals evolve and develop in a forming system. And that in turn informs how they get where they are and what they turn into, because they're building blocks of other more complicated molecules, aren't they, in more mature systems like ours?

David - That's right. The interesting thing about methyl cation is that it's a carbon molecule, and carbon is the most friendly element in the universe because it teams up with other carbon atoms, and other atoms, and forms long chains. And it does this far better than any other element in the universe. So if you're going to have the synthesis of very interesting carbon-based molecules in these circumstellar discs, molecules that perhaps might survive to get onto the surfaces of very young planets and start the process of life, you really have to find CH3+ in the circumstellar discs, because it is the root of all the interesting chemistry that you could have there. So this needed to be there, because we've detected other molecules in circumstellar discs, and indeed in gas clouds and stellar nurseries in space, which are, if you like, based on CH3+ having connected up with other carbon molecules. But we've never detected the root, if you like, of this process. So it shows us that we really do understand how important, interesting, and complex carbon-based molecules can actually be formed around a young star. Now, whether or not those molecules survive when the star perhaps gets brighter, or survive when the circumstellar disc turns into planets and then seeds those planets with very interesting chemicals that can perhaps go on further to develop life, is an interesting scenario. But finding this, since we knew it must be there, with the power of this telescope, is tremendous. So much so, in fact, that this discovery has been put out by the journal Nature without being fully peer reviewed. It's a solid paper, but they were so excited by it that they put it out as fast as they could.

Chris - Well I was looking at some of the other things that James Webb is delivering this week and it's had a pretty good week, hasn't it? We've got black holes featured and also these interesting things about the cosmic web. Tell us about those other findings this week.

David - It's a year on since it really started observing, and there's going to be a conference soon, 'One Year with the James Webb', and we are going to see wonders from this. But you're quite right. Two things that they've mentioned this week: because of its ability to look very far into deep space and the early universe - the further away you look, the earlier it is in the universe's history - they found evidence for black holes at the centres of some galaxies less than a billion years old. But the black holes were between 600 million and 2 billion times the mass of our sun. Now that is a real puzzle to cosmologists, because they did not think that in a billion years black holes could grow so big so fast. And these are at the centres of galaxies, and they will be powering those galaxies as they eat material and young stars and throw off radiation into the environment. But it's a puzzle as to how quickly they got started and how rapidly they grew. And also, looking even further back into the beginnings of the universe, back to 380 million years after the Big Bang, there's this wonderful picture of 10 galaxies in a line, so to speak. And they think that is the first real good observational evidence for, if you like, what they call the spider's web of structure around which galaxy clusters formed. Because if you look in the universe today, the galaxies aren't spread evenly around: they form clusters with big empty voids between them. And that's the start of this structure. And you know, that's three amazing fundamental discoveries announced by James Webb in just one week. It's a wonderful instrument.

A drone shot of Helsinki

13:27 - Making supercomputing more carbon friendly

The hidden carbon costs behind our most powerful computers

Making supercomputing more carbon friendly
Loïc Lannelongue, Jesus College Cambridge & Kimmo Koski, Finland's IT Center for Science

Revolutionary as it is, supercomputing comes at a cost: the electricity bills to power the massive computer stacks that are playing increasingly significant roles in research are eye-watering; especially now. Spends in the tens of millions are not unusual. This means the computing carbon footprint is also prodigious, but usually invisible to the scientists who use supercomputers in their work - ironically often to study things like climate change! But awareness is improving, and the industry is also taking steps to shift computing loads to countries and centres with the best carbon credentials and renewable energy provisions. Finland’s one of them, as we’ll hear shortly. James Tytko has been investigating…

James - A lot of the most cutting edge scientific research would not be possible without large scale computations. From algorithms able to analyse biopsy images for faster cancer diagnosis, to modelling the weather in space to keep our telecommunications systems online, they help us solve problems unfathomable to science of the not too distant past. But it all comes with a cost: training large language models like ChatGPT, for example, requires a computational grunt which results in the equivalent of hundreds of tonnes of CO2 emissions. Since illuminating this hidden cost of computing to conscientious researchers like himself with a nifty online calculator, Loïc Lannelongue has been raising the alarm about what he sees as the much-overlooked side effects of conducting computational science. I caught up with him at Jesus College, Cambridge, and started by asking him to outline the key causes of computing’s carbon footprint…

Loïc - The main one people think about is energy usage; all the electricity you need to provide to the data centre to power the computers during the task. How much electricity you need depends on what type of computer you have and the task you're doing on the computer. The carbon footprint of producing electricity most of the time depends on where you are in the world. If your country has an energy mix that's mostly relying on renewable energy or low carbon energies like nuclear, then the carbon footprint of using electricity is really low as opposed to using gas or coal.
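Loïc's framing - the energy a job draws, multiplied by the carbon intensity of the local grid - can be sketched as a small calculator. This mirrors the general shape of tools like his Green Algorithms calculator, but every coefficient below (core and memory power draw, the PUE data-centre overhead, and the two grid carbon intensities) is an illustrative placeholder of mine, not a figure from the interview.

```python
# Sketch of an energy-times-carbon-intensity estimate for one computing
# job. All default coefficients here are illustrative placeholders.
def job_carbon_gco2(runtime_h, n_cores, watts_per_core, mem_gb,
                    watts_per_gb=0.372,      # memory power draw, W/GB
                    pue=1.67,                # data-centre overhead factor
                    ci_gco2_per_kwh=231):    # grid carbon intensity
    power_w = n_cores * watts_per_core + mem_gb * watts_per_gb
    energy_kwh = runtime_h * power_w * pue / 1000
    return energy_kwh * ci_gco2_per_kwh      # grams of CO2-equivalent

# The same 24-hour, 16-core job on a coal-heavy grid vs a hydro one
coal = job_carbon_gco2(24, 16, 12, 64, ci_gco2_per_kwh=800)
hydro = job_carbon_gco2(24, 16, 12, 64, ci_gco2_per_kwh=25)
print(f"coal-heavy grid: {coal/1000:.1f} kg CO2e, hydro: {hydro/1000:.2f} kg CO2e")
```

The point of the comparison is the one Loïc makes: the job itself is identical, and only where it runs changes the footprint, here by the ratio of the two carbon intensities (32x).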

James - Part of the problem here is that perhaps a lot of researchers aren't fully aware of the problem as you've outlined it.

Loïc - Yes. That's a really good point. And researchers are really starting to be aware of this topic and are really picking up on it and are really interested. And for a long time what was missing was tools to do it, and now more and more tools are available to estimate these carbon footprints. So we are starting to move in the right direction. But because the carbon footprint is so remote, it's in a data centre you don't see, you're just using a laptop so it doesn't feel like you're using a war machine in another part of the world. It's easy to just ignore the problem.

James - And so plugging this information gap has been what you've focused your time on recently.

Loïc - Yeah, so it's all part of this, what we ended up calling this green algorithms initiative. So it started as a calculator online, and then we realised, okay, it's great, now any scientist can go and plug in some numbers. The carbon footprint of computing depends in part on where you are in the world and what the electricity source is. Now, let's say you are in Australia, and the carbon intensity in Australia is really high because everything is powered by coal. It's not your fault you're based in Australia, and you can't do much about it. You can make your models a lot more efficient, but at the end of the day, it's limited by the fact that all your electricity comes from coal. And that's where you can have collaborations between institutions, to say, okay, it's so easy to move computing across. You can have your data centre halfway across the world, in a country that has a very low carbon footprint and relies a lot on renewable energy.

Flight Attendant - Hello everyone. Before departure, we kindly ask for your attention as we go through the safety features of this aircraft...

James - To see one such collaboration between science and sustainable computing, I travelled to Helsinki to speak to the organisation behind Europe's most powerful supercomputer, which claims to have sustainability in mind from start... to finish.

Kimmo - We have a joint effort with nine other countries. This supercomputer is called LUMI. LUMI is the Finnish word for snow. That's the name of the supercomputer that's number three in the world at the moment, and number one in Europe.

James - Kimmo Koski - managing director of Finland's IT Center for Science, a nonprofit state enterprise owned by the Finnish government and higher education institutions in the nation. I asked him what makes Finland the ideal location for this sort of industry.

Kimmo - First of all, we have a lot of carbon-free electricity, robust infrastructure, good electricity systems, networks, etc. So the environment is ideal. And then it's very cost-efficient too, because the electricity price in Finland is much, much less than in many places in central Europe, especially for renewables, because of taxation.

James - Who are the main beneficiaries of LUMI's computational power?

Kimmo - LUMI is meant for quite a wide audience: for example, the medical side, artificial intelligence, climate change. I will claim that it's one of the most eco-efficient computers in the world because, first of all, the electricity that's used in LUMI is fully renewable - in this case, hydro. LUMI is located in an old paper mill in the city of Kajaani, and one fifth of the flats in that area are being warmed with the waste heat of LUMI. So we are contributing this way, too, to a very, very, very small carbon footprint.

James - So, some positive developments in Finland. I asked Loïc whether LUMI sets a good precedent for the greener data centres of the future.

Loïc - Something that I think is really important is that if we start to talk about inter-institutional collaborations, we really need to keep them equitable. A lot of countries rely on high-carbon power sources, and we don't want to unfairly penalise researchers there by saying, 'oh, you can't do computing in your institutions anymore'. And we don't want to be holding all the compute and all the data, which is what could be tempting to do: to say, 'oh, that's okay, we're going to do the compute for you'. It's not just saying, 'oh, let's just use all the Nordic countries, which rely a lot on renewable energies'.

James - It's obviously a positive thing that LUMI uses a hundred percent hydroelectric power, a sustainable energy source. But what people have to weigh up in those circumstances is that it could be robbing Peter to pay Paul: that power could otherwise be used by another part of the country's infrastructure, which now has to rely on non-renewable energy sources.

Loïc - Yes, that's the challenge, and that's the difference between building renewable energy facilities directly - so in practice you are adding to the energy grid - and having a green energy contract. It doesn't mean they add more renewable energy to the network; it just means part of the renewable energy is dedicated to you. But at the end of the day, it's still the same amount of renewable energy in the total network, so it doesn't really reduce the total carbon footprint of the country. Until the entire country, or the entire place, is powered by renewable energy, the data centre is probably not zero carbon, because there's still gas being burned to supply other users.


20:58 - Heat-resistant drone locates fire survivors

FireDrone prototype could assist in rescue efforts in burning buildings and during wildfires.

Heat-resistant drone locates fire survivors
Mirko Kovac, Imperial College London

Scientists at Imperial College London and Empa in Switzerland say they have developed a heat-resistant drone that could be used to scope out and map burning buildings and wildfires. It’s hoped that the FireDrone prototype will provide crucial first-hand data from danger zones. I’ve been speaking to Mirko Kovac from the Aerial Robotics Lab at Imperial College London and the Laboratory of Sustainability Robotics at Empa…

Mirko - In 2017, there was this terrible incident, the Grenfell Tower fire, that killed at least 72 people. I used to live close to this tower and used to drive by in the morning, and I was really shocked by this. At the same time, people used to ask me whether I had a drone that could fly inside and look for survivors and so on. And this came out of the aerial robotics lab that I'm running at Imperial College. And at that time I didn't have that. And really the technology was not there to have something that can fly inside hot environments and take relevant data.

Chris - At the moment, I suppose what we have to do is send in firemen.

Mirko - Yes. The problem is really that the firemen are human beings, and they really shouldn't go into areas that are very toxic, very hot, very dangerous. So the fire drone is a tool that can get the information to them more quickly, and like this it helps in rescue efforts. The basic idea is to have a drone that can fly in hot environments and also in cold environments. So we call it a temperature-agnostic drone that can survive for a longer time and collect data, pictures, thermal images of those environments to then inform the rescue efforts.

Chris - Presumably there are quite a few problems you have to solve in realising that goal because you've got a navigational problem, you've got a visual problem, seeing where you're going, a control problem, getting signals through, and then obviously getting the thing to resist a very harsh environment. So how have you solved all that?

Mirko - Yes, it's really a problem that is very multidisciplinary going from materials to control systems to system designs and operations autonomy. And in 2019, I set up a collaboration between Imperial College in London and the Empa Material Science Institute in Switzerland. Together we have developed this drone that uses a specially developed aerogel material that allows the drone to fly in very hot environments for prolonged periods of time.

Chris - How hot can it withstand?

Mirko - It is designed to operate at 200 degrees centigrade for 10 minutes. But this is really difficult to actually define as a number or duration, because it's about flying close to fire in hot burning environments for several minutes to collect the data, compared to current drones, which can really survive only a few seconds in those types of environments.

Chris - One of the big challenges in aviation is weight. So how do you get around the constraint that you're adding something to a craft where weight really matters?

Mirko - Yes, this was a major effort of this work, and in fact aerogel is a very good material for that. Aerogel, in case you're not familiar with it, is a material that consists of up to 99.98% air. So it's a material that has air bubbles, or pores, inside it. And because it can be an extremely strong thermal insulator, it is used in space applications, in facade insulation and so on. And we have now developed a particular aerogel that has glass fibres embedded in it, which allows it to be a structural material that can be used for the robot body while at the same time providing the insulation. We have also integrated other methods of heat insulation and management, including a phase transition mechanism with a compressed gas capsule, and an arrangement of the motors chosen to optimise the overall design.

Chris - What does the drone look like that you've built and how big is it?

Mirko - It's basically a flying sphere that has four arms with propellers at the end, the motors are inside of the sphere so that they're heat protected and the choice of going for a sphere shape is also inspired from animals that have this spherical arrangement to have a better heat protection. So like this, you can imagine it has several cameras and can fly safely inside of different environments and take pictures, map the environment and provide the data very quickly.

Chris - The idea being then that you would deploy this into a burning building and... what? Would it be autonomous - can it get itself out of trouble - or have you always got to have somebody on the end of a controller watching where it's going, guiding it, and then hopefully getting data back in real time?

Mirko - The concept is something we call shared autonomy, where the vehicle itself has some level of autonomy, such as being able to stabilise even in gusty environments, or fly a certain trajectory, or fly close to walls while keeping a certain distance. While at the same time there is mission-level control and operation from a human operator, who might have, let's say, a headset, a virtual reality interface, to then be able to have this remote presence in those types of hazardous and hot environments. So it's really sharing the autonomy on the vehicle with the operational control and mission planning of the human.

Chris - So can you take us through a scenario as though this were being deployed into a situation? What would it do and how would it help?

Mirko - So imagine there's a fire in a building, and the firefighters want to see whether there's a survivor or somebody trapped inside the building. Instead of trying to go inside physically by themselves - which is very dangerous, not just because of the fire but also because structures can collapse - they could deploy a handheld-sized flying robot into the building that would fly autonomously and map the interior, getting temperature signatures of the environment to identify heat sources and also to see whether there are any survivors or people inside the building. Such an operation could take a couple of minutes, compared to the much longer time which would be needed for a human to go inside safely.

Lightning during a thunder / electrical storm

27:18 - How do we know when lightning hits the Earth?

How can weather services know when lightning strikes occur?

How do we know when lightning hits the Earth?

We got in contact with the Met Office’s lightning scientist, Dr Graeme Marlton.

Lightning is an electrical discharge through our atmosphere. It produces a very strong electrical current, of the order of thousands of amps, and that heats up the air around it, which causes an audible sound wave to propagate outwards, which we hear as thunder. And finally, because there's a large electrical current flowing downwards, that actually generates a very broadband radio transmission, and it can be detected at various parts of the radio spectrum. One of them is VLF, between 3 and 30 kHz. And one of the great things about VLF is it can propagate thousands of kilometres without becoming attenuated. So at the Met Office, we operate a lightning location system that uses this property. A lightning strike occurs somewhere over the UK, and a VLF radio wave will emanate out from the lightning strike. We operate 10 to 11 receivers across Europe, and it'll be detected at each receiver. So we know the relative arrival time at each receiver, and from that we can geolocate the lightning strike. And so, in practice, our lightning location system, LEELA, will detect many, many lightning strikes each second, and it's able to geolocate and position each one.
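The time-difference method Dr Marlton describes can be sketched as a toy multilateration: each pair of receivers constrains the strike to a curve of constant arrival-time difference, and combining several receivers pins down the location. The receiver layout, flat-plane geometry and brute-force grid search below are simplifications of mine; the real LEELA network works over thousands of kilometres on a curved Earth with far more sophisticated solvers.

```python
import math

C = 3.0e5  # propagation speed of the VLF wave, km/s (roughly light speed)

def arrival_times(strike, receivers):
    """Time for the VLF pulse to reach each receiver (seconds)."""
    return [math.dist(strike, r) / C for r in receivers]

def locate(receivers, times, span=600, step=10):
    """Grid-search the point whose predicted time *differences* best
    match the observed ones (the absolute strike time is unknown)."""
    obs = [t - times[0] for t in times]  # differences vs first receiver
    best, best_err = None, float("inf")
    for gy in range(-span, span + 1, step):
        for gx in range(-span, span + 1, step):
            pred = arrival_times((gx, gy), receivers)
            diffs = [t - pred[0] for t in pred]
            err = sum((a - b) ** 2 for a, b in zip(diffs, obs))
            if err < best_err:
                best, best_err = (gx, gy), err
    return best

receivers = [(0, 0), (500, 0), (0, 500), (500, 500)]  # positions in km
true_strike = (120, 340)
est = locate(receivers, arrival_times(true_strike, receivers))
print(est)  # prints: (120, 340)
```

With timing differences of microseconds corresponding to kilometres of position, the real system's accuracy hinges on very precise clock synchronisation between receivers.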

