How worried should we be about AI?
17 October 2017

Interview with Simon Beard, Centre for the Study of Existential Risk, and Peter Clarke, Resurgo Genetics

The panel discuss the near and future risks AI might pose to humanity. Chris Smith began by asking Amazon's Alexa if she had any nefarious plans...

Alexa - I have nothing to do with Skynet.

Chris - It does at least have a sense of humour. In the meantime, we asked Dave Palmer, who you heard from earlier - he’s from Darktrace - what he thought about the dangers of AI.

Dave - There are absolutely no remotely imminent technologies or research that are going to create something societally damaging, or create a self-aware robot that could do us harm. But there are plenty of things we should be worried about that evil people might do. Things like weaponised drones, or some of the potentially negative side effects of gene editing and DNA editing, are far more concerning than the rise of the Terminator. We’re not going to see that in my lifetime - no way.

There are many people, including a professor whom I respect enormously, who would say AI is the next electricity. Steam power was the first industrial revolution, electricity the second, computing the third. AI is probably going to be the fourth, and I would agree with that. What we’re seeing is the emergence of techniques that allow us to deal with really complex things that were previously out of the reach of what we could do with computers and programming. It’s near impossible to extrapolate where that ends up, but it’s an enormously exciting time too.

Just as we’ve become completely used to, and normalised, the fact that we’ve all got these little smartphones - supercomputers in our pockets - and they don’t seem remarkable any more, I’m sure that in 10 to 15 years we’ll feel the same way about AI: it will have changed how we interact with each other and how we interact with the world, but pretty much all for the better.

Georgia - And the question we’re thinking about at the moment: the risks of AI. Simon, you work at the Centre for the Study of Existential Risk, so I think this is one for you. How do we assess the potential risks of AI?

Simon - The first thing I want to say, going back to the question you asked - how much should we worry about AI? - is that I think worry is a very unhelpful response to the risks of AI. It’s really hard to assess those risks. What we do know for sure is that there are possible bad outcomes that could occur from developing AI. None of those has anything to do with the Terminator or other such stories; those are just stories about people. They tell us a lot about ourselves but next to nothing about AI, and they don’t really appear on our radar. But there are lots of things that could go wrong.

Lots of those, particularly in the short to medium term, are indeed, as Dave Palmer was saying, about the interaction between people and AI. We get things wrong, and AI could give us the potential to get things wrong on a much bigger scale, just as nuclear weapons do. But worrying about that isn’t necessarily going to make it less likely, so that’s not the response we need to have. What we need to do is get enough clever people working on how to prevent bad things happening - to stop them before they happen. And that’s the key thing about a centre like mine: to solve these problems before they become problems, so that no one has to worry about them.

Georgia - Can you give me an example of one of these potential problems, and how we might reduce the risk other than just running for the hills?

Simon - One very specific example that we’re concerned about is the use of AI and other algorithms in the modernisation of nuclear weapons. This is a great one for us because it’s the interaction between two existential risks - artificial intelligence on the one hand and nuclear weapons on the other.

We know that lots of states, the US in particular, are going through a process of modernising their nuclear weapons launch systems, which are currently stuck, technologically, in the 1970s. Algorithms have the potential to greatly increase the efficiency of those systems and make them much better according to the kind of “game theory” models that nuclear weapons systems are based around. But historically there have been too many near misses where it’s come down to individual discretion: people made what may at the time have looked like the wrong choice in order to avert a nuclear counter-strike, and it later emerged that the technology was faulty.

That’s one we’re looking at right now, where there is the potential, if this goes wrong, for AI to actually cause a lot of damage in the very short term. But to see it is to realise the problem, and hopefully therefore to avert it. So don’t worry about it - but that’s just an example of what we need to avoid.

Georgia - Peter?

Peter - I think that there are so many potential risks that I don’t think we can say don’t worry about it. Even with respect to jobs - yes, we can all get jobs looking after each other but, at the end of the day, the thing that drove urbanisation and industrialisation was people working to make things. Once you take that away, you are taking away a large proportion of human economic value in that standard system. So I think there are dangers along that road, and I think there are also dangers around power structures. Putin came out and said that an AI arms race was starting and whoever won it was going to rule the world, so we’re potentially entering a new type of military arms race, and we don’t necessarily know how that’s going to come about. There are already weaponised artificial intelligence systems - robots and drones and things like that - and you can imagine automatic targeting; all the technology to do that already exists.

Chris - I’ve got a tweet here - you’re talking about weaponising things. John Hancock says @nakedscientists: if the Terminator is possible and it hasn’t been back for Trump, that means there’s worse coming.

Georgia - Well, speaking of Twitter, there was the idea that AI, in the form of Twitter bots, might actually have impacted the US election, which is quite a scary thought.

Simon - Yeah, I think that is very interesting. That’s a whole interesting area, and it does come down to people: people were driving those things, and they were using automated systems to shift the democratic process.
