What is artificial intelligence?

A beginner's guide to AI...
17 October 2017

Interview with 

Peter Clarke, Resurgo Genetics, and Simon Beard, Centre for the Study of Existential Risk


To get a basic introduction to artificial intelligence, Georgia Mills spoke to Peter Clarke from Resurgo Genetics.

Peter - Artificial intelligence: the standard definition is that these are computational systems, artificial systems, that show behaviours we would attribute to intelligent things. In other words, having machines which can exhibit behaviours to which we would apply, by our own definition, the human word "intelligence". It's a fairly broad brush.

Georgia - There seems to be a lot of AI about. A company will say "we use AI to do this", but then in a film AI is this big robot running around taking over the world. So what's the difference between the things we're using now and, I guess, what the media considers AI?

Peter - There's the kind of sci-fi version of AI, which is the Terminator roaming around hunting you down, or some sort of superintelligence controlling everything. That really is far off in the future at the moment, but we can see it on the horizon; we can think about it, but it's not something that's immediate. Whereas there is AI that's touching all of our lives every day. People have stopped worrying about spam in their emails because there are these machine learning algorithms. At what level would you classify something as intelligent? In some ways everyone thinks of AI as being the future, but phones that recognise you, things like Alexa - ten years ago those would have been considered a future intelligence, a sci-fi thing. We just don't know necessarily how fast or exactly how things are going to progress over the next few years.

Georgia - Simon, is this something you think about when you’re considering the risks? Are there different types of AI?

Simon - It's not so much different types, but for the purpose of understanding the risks associated with AI it is useful to make a couple of distinctions. Pretty much every form of AI we have at the moment is what we classify as narrow AI. That means we've developed an artificial intelligence, but we've developed it to do something really quite specific. It can learn and it can be creative and do all sorts of things, but only within that narrow domain. So a chess robot can play chess and a Go robot can play Go; a Go robot can't play chess, and vice versa.

Now much of the risk that we talk about is actually associated with a slightly different concept, which is general AI. That's AI that has all these capacities of intelligent systems and can apply them to any domain without restriction. So that's intelligence that has the same sort of features human intelligence has: we can learn something in one field and apply it to a different field; we can do different things, even at the same time; and there's really no restriction on what we can and can't do.

Then there's the idea of superintelligence. Now superintelligence is, by definition, general artificial intelligence, but it's general artificial intelligence that is better than humans. That is, its problem-solving capabilities are better, its ability to coordinate between different intelligences is better, its creativity is better. And it's when you get to superintelligence that you get the risk of: well, if it can do better than we can, and it decides to do something that might not be in our interest - not necessarily with any malice whatsoever, it may well be doing exactly what we told it to do - we still might not be able to adapt or respond effectively, and we might find ourselves on the losing end of a really big problem that we are not able to solve.

Georgia - And I know we’ll be discussing the risks in a little more detail later.

But Peter, how does this AI actually work then, narrow and general?

Peter - With narrow AI, you're really giving it a task, and these systems can learn to perfect that task. For example, playing Go, which is some recent work: they can become very good at specific tasks, and they do surpass human capability on those tasks. But what we're moving towards is that, rather than learning a particular task, what you want to do is learn the world - having a model of the world. We can see the light at the end of the tunnel, or the darkness at the end of the tunnel; maybe it's the train coming towards us, but we can see it coming. We have to get ready for it, because it could come a lot quicker than we expect or it could be quite slow, but we need to prepare.
