AI passes Turing Test, and new drug for Covid
In the news: the old liver drug that turns out to help prevent Covid-19 infection, the artificial intelligence systems that pass the Turing test and can write their own computer programmes, and what bats and heavy metal singers have in common.
In this episode
00:57 - Old drug boosts Covid resistance
Old drug boosts Covid resistance
Fotios Sampaziotis, University of Cambridge
We kick off this week with a topic that, thankfully, we've not had to talk about for a while, and that's Covid-19; and there's some good news, because researchers from Cambridge University have discovered that a very cheap, very safe drug called ursodeoxycholic acid, already in routine clinical use for treating some forms of liver disease, temporarily turns off the gene for a protein called ACE2 that the Covid-19 virus uses as a "doorway" to infect us. Taking the drug reduces the levels of this protein, which appears to make a person much harder to infect and potentially retards the ability of the virus to cause disease. Fotios Sampaziotis is one of the team that spotted this could happen…
Fotios - It was a serendipitous finding. We were working on a mechanism to make the liver repair itself following damage. We were looking at a couple of drugs and we were trying to find what the effect of certain drugs was on liver cells: which genes get turned on and which genes get turned off. What we found every time was that ACE2, which is the formal name for the doorway that allows SARS-CoV-2 (the Covid virus) to get into the cells, was turned off with these drugs each time. But we didn't quite know what to make of it, and we kept ignoring it and moving on to the next one and the next one, which made more sense.
Chris - And you ignored it just because this was before the pandemic happened and it wasn't relevant to your research. So you just thought, "oh, I'll put that to one side. Not relevant."
Fotios - Absolutely. "I'm not sure what this protein does, but I'm sure the time will come when we will find out, but we won't focus on that for now." And, true to that, the time did come. As soon as the pandemic hit, everybody started talking about this protein, which is the doorway to the virus and so important for the virus to get into the cell. And we thought, "hang on, we can modify this protein, switch it on or switch it off, and take it away. And if we take away the door, the virus will never be able to get into the cells."
Chris - It did emerge pretty quickly during the pandemic that ACE2 was this marker on the cells in the nose, in the throat, in the lungs, that the virus grabs hold of to get into cells and infect in the first place. So I suppose the light bulb must have gone off in your head. "We've got some drugs that we already know turn this thing off and just make it go away."
Fotios - Exactly. That's exactly what happened. And we thought "let's test it and see if it works on liver cells." It worked brilliantly. We put the virus on the cells and saw that we could absolutely block viral entry.
Chris - In the liver?
Fotios - We started in the liver. But then the next natural question is: it's not a liver disease, is it? It's a lung disease. So, let's go and test it on mini lungs, which we call organoids, in a dish. We produced some lungs in a dish, we infected them with the virus, we gave the drug, and we saw that we could block the infection with the drug. So we were quite happy now in the lab that it seemed to be working on all the relevant cell types. And the biggest question then is, can we move from cells to a whole organism? So that's when we decided to test it in animals, in hamsters.
Chris - Why hamsters? Are they a good model for what would happen in a person?
Fotios - Usually, most people who do research work with mice, but mice do not get Covid. So the hamster became the model to use and it worked brilliantly. But of course it wasn't human. So the next step was to move even further and go into humans and human organs.
Chris - As in give people the drug and see if it turns off their expression of this ACE2 doorway for Covid?
Fotios - We wanted to see if it turns off the expression of the doorway, and if shutting down the door blocks the virus from getting in. We gave the drug to people, eight doctors in Germany. We measured the levels of the doorway, then they took the drug for five days, and we measured what happened to the doorway: we saw a significant reduction. But we didn't know whether or not that translated to the virus not getting in. So we took human lungs which were offered for transplantation but could not be used. We put them on a machine that essentially keeps them alive for several hours outside the body. One lung got the drug, the other lung didn't, but we gave Covid to both lungs. And what we saw is that the lung that was receiving the drug got infected far, far less. And that was the closest we could get to actually giving the infection to a human organ.
Chris - Is another way to probe this - because you use this drug in the clinic - to say, "well, let me look at my patients that I've got on this, and equivalent or similar patients that I haven't got on this, and ask, 'is there an excess of Covid in the people who are not on this drug?'" Because that's another way of asking that question in real human beings.
Fotios - That's spot on. This is the final step of what we did, because we use this drug in the clinic all the time. We asked exactly this question and the answer was yes. But of course you have to be very careful with these results, because the patients who are taking the tablet versus the patients who are not taking the tablet can differ in many, many parameters, and some of these factors could affect the virus. Most likely it was the tablet that blocked them from getting the virus, but we cannot fully exclude that they were shielding more or that their disease somehow rendered them less susceptible to Covid. This is why we need a clinical trial, which I think is where we're going next.
06:34 - Bats growl like heavy metal frontmen
Bats growl like heavy metal frontmen
Coen Elemans, University of Southern Denmark
Listeners of a certain vintage may recall the shocking moment on January 20th, 1982 when heavy metal singer Ozzy Osbourne bit the head off a bat live on stage. But a new study has shown that bats and singers might have more in common than just that unfortunate incident. For the first time, scientists have been able to study the workings of the bat voicebox, or larynx, to watch how the vocal cords, as well as sets of additional vocal cords - called “false vocal folds” - vibrate to produce sounds, both in bats, and in us: it’s how Tuvan throat singers make those haunting melodies. Will Tingle spoke with the University of Southern Denmark’s Coen Elemans…
Coen - I'm generally interested in how animals produce sounds, and so we've studied this over the last decade or so in lots of different animals, and we typically see that most animals use something similar to what we do. So they have vocal folds in their larynx - or, in birds, in the syrinx - and they oscillate these tissues basically, and that produces sound. And in bats there was a long-standing debate about how they actually make these sounds. The idea was that they make it with very thin membranes that sit on their vocal folds, but we weren't quite sure, because it's very hard to film this. So if you want to have direct evidence, you need to film this in a tiny bat's tiny larynx. And the oscillations they make go over a hundred thousand times a second, so you need very fast filming and then lots and lots of light. So this is very difficult. You can't ask a bat, like, 'open your mouth, echolocate, and we'll stick a camera in' that's <laugh> very large and that needs lots of light. So for a long time people haven't been able to solve this. And what we did now is we used a very different approach: we studied basically the isolated larynx, and then we can very nicely film everything. And that was the incentive of the study, to figure out how bats make sound.
Will - And what did your study do to try and achieve this?
Coen - What we did is we studied the isolated larynx and that allows us to go to very high frame rates. So we filmed these vocal fold oscillations at almost a quarter million frames per second. So that's a lot. And then we could very nicely see how these vocal membranes that sit on the top of the vocal folds oscillate and produce the echolocation.
Coen - When we did that, we also saw that slightly above the vocal folds is another set of vocal folds that we also have, called the false vocal folds. And they're called false vocal folds because in humans no real function was ever found for them when they were originally described. They're not used in normal speech, they're not used in normal song. But we found in bats that they oscillate very nicely, very easily. And they did so at a frequency that corresponds to their social calls. So when they are, for example, annoyed with each other, or when they return from a flight into a colony, they use these calls to welcome each other.
Coen - But we saw that the false vocal folds do this. And in humans, the only recorded evidence for common use of these false vocal folds is in death metal grunting, where we have some evidence - not a lot either. And then, for example, Tuvan throat singers also seem to lower their false vocal folds close to their normal vocal folds, and then they oscillate together, and that gives them a very low frequency, but it also gives them a very rough sound, because the oscillations typically become very irregular.
Will - Presumably then we didn't evolve these false vocal folds purely for death metal singing. So it's perhaps something to be considered almost as a vestigial limb - maybe from a common ancestor.
Coen - Yeah, so very little is known about these false vocal folds and their function in sound production. So yeah, we don't know actually that's a good idea. I like it.
Will - And what did your study find?
Coen - So what we found is that the vocal membranes oscillate at these very high frequencies and generate the echolocation calls. The echolocation is what makes bats so special: they evolved flight and they evolved echolocation so they can hunt very fast-moving prey at night. And then we found that the false vocal folds oscillate at these lower frequencies, and together they give the bats an enormous vocal range. So they can produce very low growls that are around one kilohertz - that's very low for them, because it's a small animal - and they can go all the way up to 120 kilohertz with their vocal membranes. And that gives you a vocal range of about seven octaves, which is absolutely enormous considering that normal vertebrates and normal humans have a range of about two or three octaves; really good singers with a large vocal range, four. And the absolute top singers like Mariah Carey and Prince and David Bowie can go up to almost five. But that's exceptional, and this is a normal bat.
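That seven-octave figure follows directly from the frequencies quoted: an octave is a doubling of frequency, so the octave span between two pitches is the base-2 logarithm of their ratio. A quick illustrative check in Python:

```python
import math

# An octave is a doubling of frequency, so the octave span between
# two frequencies is log2 of their ratio.
low_hz = 1_000      # the lowest growls, around 1 kHz
high_hz = 120_000   # the highest echolocation calls, around 120 kHz

octaves = math.log2(high_hz / low_hz)
print(f"{octaves:.1f} octaves")  # prints "6.9 octaves" - about seven, as stated
```

For comparison, a five-octave human range (the Mariah Carey end of the scale) spans only a 32-fold frequency ratio, against the bat's 120-fold.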
Will - Well, so we should be looking out for the next hot new single from a bat then <laugh>. Christmas number one.
Coen - Yeah, exactly.
12:09 - Coding and chatbot AI trump Turing test
Coding and chatbot AI trump Turing test
Michael Wooldridge, University of Oxford
It turns out that it's been an exciting week for the field of AI - artificial intelligence; on one front, the team at Google's subsidiary DeepMind have published a paper demonstrating a system that can write its own code. In essence, this is a step towards computers that can programme themselves, and it looks very impressive: you set it a programming challenge using plain English - for instance, to develop a segment of code that can manipulate a series of numbers or letters in a certain way - and it does a better job than half of the human programmers asked to solve the same challenge. I spoke with Oxford University's Mike Wooldridge, a computer scientist and author who publishes on AI; we began, though, by discussing another AI breakthrough that's been making waves this week: ChatGPT, an artificial intelligence chatbot. As far as Mike's concerned, this passes the revered "Turing Test", named after computer scientist Alan Turing, which is a measure of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human...
Mike - There's been a huge buzz on Twitter about the latest AI development, and it's a new system from a company called OpenAI. Slightly ironically named; they're not 'open' at all. They're funded by Microsoft and they're a for-profit company. But they've released a series of tools over the last couple of years which do tasks connected with what we call natural language, which just means ordinary language that people use, languages like English, the language that we're using now. Tools that can communicate in natural language have long been a goal of artificial intelligence, and they've been very difficult to build. This is what Alan Turing was talking about in the 1950s when he introduced the Turing test. Well, the bottom line is, the Turing test is now passed. These new tools can generate text which is easily as good as a very good speaker or writer of English, but it turns out they can also do some other impressive tasks as well. One of the standard things I do when I demonstrate these tools is go to the BBC website, copy a news story, then ask GPT to summarise it, and it will produce a startlingly good summary. And I can then say, "what are the three key bullet points about this story?" And it will again do a remarkably good job of producing the three key bullet points. You can then ask it to translate it into any number of other languages, and it will again do an equally good job of that.
Chris - When I talk to researchers that are increasingly employing AI in their practice, I ask them, "can you explain how it's helping to do what it's doing? Is it an explainable AI? Can it tell you how it's doing it?" And they just look at me and they say, "no."
Mike - No. And these systems are enormous. It turns out that to make it work, you need massive what are called neural networks. You need AI supercomputers running for months in order to build these systems. And what comes out is capable of some seemingly remarkable feats. But do we understand how it's doing it exactly? No, we really don't. There are some caveats about this and one of them is when to trust it. They can be very plausible in what they tell you, but sometimes they can be plausible but completely wrong. And if you are gullible, this can set you off on a very bad track indeed. So no, we don't understand exactly how they're doing what they're doing, and this is one of the big challenges with the technology at the moment.
Chris - Well, that was going to be my follow up point, which is that if we don't know how it works, how do we make sure that what it's generating is reliable and is authentic and it's not got some kind of glitch which means it then summarises that BBC news article but accidentally injects the wrong interpretation. And when someone reads that top level summary, they're given totally the wrong impression. It's like a newspaper printing a misleading headline.
Mike - God forbid that newspapers would ever print misleading headlines. That's absolutely one of the challenges. There's a huge amount of work right now to try to understand exactly where they can be trusted and where they can't in the short term. Where they're most likely to be used is in low risk scenarios, you know, where these are not life or death situations, where it's not somebody's job that hangs in the balance on the output of these things. But there's a lot of work yet to be done on exactly understanding where they are reliable and where they aren't.
Chris - Can I point you at a paper that's come out this week in the journal Science? It's presumably founded on the same sort of technology, or the same system, where what they're doing is now saying, "we've got a system that can write computer code and it does it on average slightly better than a human can." But in the same way that we would give a human instructions - "I want a computer programme that does X, Y, and Z" - it seems to be able to take those human instructions and turn them into reasonable computer code solving the problem most of the time. It seems pretty impressive to me that you can do this.
Mike - You are right. The technology is exactly the same. The way that they work - they're what you might call a glorified autocomplete feature. What this particular system has looked at is all the computer programmes that are available on the world wide web, and there are a huge number of those. You type what you want the programme to do, it'll generate a large number of possible candidate programmes, and then it'll whittle those down by running some tests to see which ones look like they're producing the likeliest answers. It's very neat. I think it's a lovely result. I absolutely would not trust a computer programme that came out of that process, at least not in any mission-critical or life-critical situation.
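The generate-and-filter loop Mike describes can be sketched in a few lines of Python. This is purely illustrative: the candidate "programmes" below are hand-written stand-ins for what a trained language model would propose, not anything resembling DeepMind's actual system.

```python
def generate_candidates():
    """Stand-in for the language model: propose candidate programmes.
    A real system would sample many thousands of these from a model
    trained on code scraped from the web."""
    return [
        lambda xs: sorted(xs),          # sorts the input
        lambda xs: list(reversed(xs)),  # reverses the input
        lambda xs: xs[:],               # returns the input unchanged
    ]

def passes_tests(programme, test_cases):
    """Keep only candidates whose output matches every example."""
    return all(programme(inp) == expected for inp, expected in test_cases)

# The task, specified by input/output examples: "sort the list".
test_cases = [([3, 1, 2], [1, 2, 3]), ([5, 4], [4, 5])]

survivors = [p for p in generate_candidates() if passes_tests(p, test_cases)]
print(len(survivors))  # prints 1 - only the sorting candidate survives
```

Filtering against example tests is also why, as Mike warns, a surviving programme can still be wrong: it is only guaranteed to agree with the handful of examples it was checked against, not with the full specification.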
Chris - The difference of course in the two situations is that if it generates computer code, because you give it instructions - "this is what I want to achieve" - it will come up with some code for you. You can then interrogate the code it's come up with and you can see how it's working. So there is a possibility of making it explainable in terms of its output when it's generating a tractable thing like computer code we can understand, isn't there?
Mike - Yeah. Although computer programmes are notoriously difficult to understand. So there's going to be a trade-off between the extent to which it's worth you just writing it yourself, knowing that you understand it, versus the amount of time that you've got to take to convince yourself that it works. But as I say, what you absolutely shouldn't do is just trust it out of the box. That would be extremely naive, and I think that's one of the big worries about this application. That doesn't mean this isn't a very neat result from DeepMind. I think it is. In programming competitions, where people are asked to produce programmes to a certain specification in a certain amount of time, they've demonstrated very creditable performance on that task. So that's a nice result. But computer programmers, I think, can sleep easy in their beds. I don't think they're about to be replaced.
20:37 - Parrots prefer particular phrases
Parrots prefer particular phrases
Lauryn Benedict, University of Colorado & Christine Dahlin, University of Pittsburgh
We have known for some time that parrots and cockatoos are excellent mimics; indeed, that is where the phrase 'parroting' someone comes from. But the extent of each individual parrot species' ability to mimic, and the reasons why they latch on to certain human words, is a subject that hasn't been given much attention. That is, until Lauryn Benedict, of the University of Colorado, and Christine Dahlin, of the University of Pittsburgh, used their time during Covid to create a questionnaire for zoos and parrot owners, asking them about their birds' speaking habits. This questionnaire was, appropriately enough, tweeted out, and gave them a sample of over 900 individuals from 73 species. They spoke to Will Tingle about why parrots like to mimic humans, and how it may provide insight into the evolution of language.
Christine - One interesting finding was that a lot of the parrots are not only using mimicry extensively, they also tend to mimic sounds that are most socially appropriate. In the wild, we have hypothesised that this vocal mimicry ability allows birds to integrate into their social groups: potentially they're learning sounds that are appropriate for whatever flock they're currently part of. And most of the birds that we looked at were mimicking the words and the phrases being used by the families that they're part of. So they don't tend to mimic sounds like dogs barking or doorbells ringing or phones as much. It's more like 'hi, how are you?', 'goodbye' - the commonly used phrases in their household.
Will - So Lauryn, are you saying then that their imitation of us is them trying to integrate themselves into our social circles?
Lauryn - Yes, very much so. And that's the way that parrot flocks work in the wild: they're large, and there is, in many species, some fission and fusion of those flocks. So parrots might join new social groups over time. And the current thinking in the field is that the reason they can continue to learn vocalisations throughout their lifetime is so that, if they move social groups, they can begin to learn the local dialects and fit in and integrate themselves with that social group a little bit better. And interestingly, we did at one point look through some of the voluntary responses telling us the words that birds used most often. A lot of those words were words you would expect, like 'good bird', 'step up', things like that. But there were also a few swear words in there <laugh>, and at least a few parrots have learned to say 'Alexa' and various other things, probably because they hear their humans say them all the time.
Will - That's great and dystopian in equal measure. Now, as humans, our vocabulary expands and we can theoretically learn words from now until our very last day. But is this the case with parrots? Does their vocabulary expand as they get older?
Christine - The one species for which we had a sufficient sample size to examine that question was the African grey. For the African grey, it seems that they are able to expand their vocabulary up to about five years of age, and then after that it pretty much levels off. So potentially they are replacing words or sounds in their vocabulary that they're no longer using as they gather new ones. And we suspect that this might be useful if birds in the wild are doing something similar: maybe if they have moved to a new social group they can replace no-longer-used words, or part of their call repertoire, with new ones.
Lauryn - But they certainly can learn later in life. So a 50 year old parrot absolutely can learn new vocalisations. They just don't seem to keep expanding their repertoire. So maybe when they learn that new vocalisation, they drop something else out of their repertoire.
Will - This may be a bit of a sideways leap, or a bit of a stretch, but do you think that this could show us perhaps how language could form? Obviously with humans we'll never truly know - there's only one species and it was a long time ago - but perhaps human speech and vocabulary started out as a series of imitations. What scope is there, do you think, for this study to open up into the development of language in another species?
Christine - I feel like there are lots of elements of human language that we can see in parrots, and indeed in other animals. One aspect is referentiality: the ability of sounds to refer to specific elements, or referents, in the environment. Even in our study we can see context-specific usage of the birds' calls, and there are lots of animals that will refer to different predators with different calls. So that's an aspect of human language. One aspect of parrot language that I've studied in wild parrots, in yellow-naped amazons, is syntax. Syntax is the set of rules that governs how you put long strings of vocalisations together. Yellow-naped amazons give complex duets in which males and females precisely coordinate their calls in certain ways: they put certain types of calls at the beginning, they time their calls in specific ways. And so these rules are really important for the structure of these calls.
Lauryn - And there are some pieces of human language that we don't see in parrots: things like the ability to rearrange words almost infinitely to make new meanings all the time. We certainly haven't demonstrated anything like that in parrots, and we don't know that they always have an understanding of the meaning of the words they say when they say them in context. That would be a future thing that I think researchers should try to look into. And of course there's great work, like with Alex the African grey, on questions to do with cognition and really understanding what they're saying. But certainly that ability to make really clear mimicry of particular sounds, so that groups can have shared sounds, and the ability to use them in the right context, is a really interesting stepping stone to potentially more complex, language-like communication.