Experts call for AI moratorium

Concerns have been raised over the rapid development of potentially dangerous technologies...
11 April 2023

Interview with Henry Shevlin, University of Cambridge

Recently, more than 1,800 technology leaders and researchers, including technology entrepreneur Elon Musk, penned an open letter urging artificial intelligence labs to pause development of their most advanced systems, cautioning that AI tools present “profound risks to society and humanity.” Chris Smith wanted to know what those in the field believe those risks are, so he asked Henry Shevlin (from Cambridge University’s Centre for the Future of Intelligence) why he signed the letter, and what he most fears…

Henry - Realistically, I think it's going to be very hard to slow down AI research for six months, let alone stop it. But I signed it because I thought it was a good idea to draw greater public attention to the fact we are playing with fire here. I think AI is an amazing technology with real potential for good. People compare it to things like the invention of electricity or the internet, or even the microchip. And I think those comparisons are apt, but at the same time, there's massive potential for misuse and harm, and I think we just need to get a good handle on what's happening and not rush blindly ahead.

Chris - When you say there's risk of misuse and harm, give us some examples of the kinds of things that this might unlock that we would rather it didn't.

Henry - So you get a whole panoply of risks. One that's already here, and that I've tried out myself, is voice cloning, which is easier than ever before thanks to AI. I can take a two minute clip of your voice, Chris, run it through an AI system (I won't say the website, but there are plenty of them out there) and produce a voice that's basically indistinguishable from yours. So imagine if I then call up your producer and say, 'Oh hi, I've forgotten the code to the building. Could you let me in?' That's an easy way to conduct hacking attempts or even physical break-ins, and there's a whole bunch of misuse cases associated with that. Then, in the medium term, you can think about things like autonomous weapons that we could potentially lose control over in war zones like Ukraine. And the really long range stuff that I think a lot of people who signed the letter are worried about is the idea that we might lose control of the whole thing. Now, I think that's quite speculative, but I also think it's not a crazy thing to worry about. As soon as you start building systems that are maybe smarter than us, more powerful than us, you do have to make sure that we're still the ones in the driving seat.

Chris - People talk about 2050 as the turning point where we might cross a threshold and have things that are brighter than us. But based on what you are saying, we're almost there; 2050 seems a bit of an unambitious aim.

Henry - Yeah, well, there are these prediction markets where experts can take turns at predicting how far away these things are, how long until we get superhuman intelligence in AI. And if you look at the predicted date for artificial general intelligence, sometimes called AGI, it's been shifting steadily earlier from 2050, and I think the average answer is now 2031. Now, of course, as the old saying goes, it's hard to make predictions, especially about the future. If you'd asked people 10 years ago whether we'd be able to create video and images just by typing a few words on a screen, I think they would've thought you were loony. And yet technology has moved really fast. But at the same time, you sometimes run into unexpected obstacles. We might find that we start running out of data to train these models on; that's something people have been worried about. We might run into difficulties building bigger and bigger computers. But the one thing I would emphasize is that this is not an end of the world scenario. <laugh> 2031 is the point where we think that, hopefully, we'll have systems that can do better science than us, can design cities better, can build better cars. I mean, there's a lot to be optimistic about there. We've just got to make sure we've got our bases covered and these systems don't run amok or do things we don't want them to.

Chris - We train these things by taking pretty much all the knowledge that we have on the internet, letting the machine ingest it, and basically seeing connections between all these different bits of data. But that means it's passively absorbing what we've already done, human endeavor. What it's not able to do, therefore, is know what we don't know. And arguably that's what we want it to know. So is it actually able to think outside the box, or is it constrained by the existing ideas and concepts that we've fed it? And does that still leave quite a lot of room for that 'je ne sais quoi' about the human brain, the creativity that we bring to the party?

Henry - Yeah, I think that's a brilliant question, and beautifully explained as well. It's tricky. One thing I would note is that sometimes even the information you already have contains things you haven't realized. Think about the way astronomy worked in the era of Copernicus: we had really accurate measurements of all the stars and planets, but it took a great mind to figure out how they all fitted together and work out that the sun was the center of the solar system rather than the Earth. So sometimes, even in the data you already have, you can spot connections or new theories. Another classic example is Sherlock Holmes. Sometimes it's not that he sees things other people don't; it's that he can spot the connections between them. So even if a system is trained just on information we already have at our disposal, it's possible that AI will be able to spot connections or ways of tying together data that we haven't before. Now, creativity's a tricky issue. It's something I've written about in my own research, and I've got to admit I'm very enamored of the AI image models. I've had great fun playing around with them, generating cool artworks; I've got a couple of pieces on my wall. Are they creative? Well, I think they can certainly be beautiful. I think they can be interesting. Whether we decide to call them creative is almost a societal decision, I think.

Chris - The letter calls on us to hold fire on AI for a bit. What are you hoping we should do in that six months or so where you're saying, 'look, let's just pause this for a minute'? What do you think needs to happen?

Henry - I'll be honest, I always thought that the six month thing was unrealistic. I mean, it wouldn't be a bad thing if we did have the six month moratorium, but what I'm really hoping for is that this generates a broader public debate, because right now these systems are largely being designed and built by tech companies, and even as an academic I find it really hard to get policymakers to care about it. Some of your listeners may have seen movies like 'Don't Look Up', where the scientists are desperately trying to get the politicians to pay attention to the problem. It can be really hard to get policymakers' attention and say, 'look, we need a better public debate about this; we probably need more transparency; we need more regulation.' So one of the things I was hoping for when I signed the letter is that it would bump AI up the public and media agenda. And I think it's working, to be honest: the amount of media coverage of AI has massively exploded since the letter came out. A lot of the conversation is being carried out at quite a sophisticated level, and hopefully politicians are starting to pay attention.

Chris - Well, let me put you on the spot then. Where do you think the biggest breakthroughs in this area are going to come? Where is the average person going to see this impacting on their life, in your view, in the near future?

Henry - Well, I'm going to make a bold prediction here, one that not all my colleagues would necessarily share, but I'm pretty confident in it. I think we're going to see an increasing involvement of AI in our social lives, in the form of AI friends, maybe even AI girlfriends or boyfriends. And I wouldn't be surprised at all if, a couple of years from now, you look at the top 50 influencers on TikTok or Instagram and there are some AI personalities among them. We can already build AI systems that are fun to talk to and can be very charismatic, and now we're integrating text with video and images. I think we've got all the ingredients for a new age of AI celebrity culture, which is both exciting and a little bit worrying.
