Intelligent Items or Malicious Machines? Artificial Intelligence Examined

23 September 2007

Interview with Professor Nigel Shadbolt, University of Southampton & President of the British Computer Society.


Professor Nigel Shadbolt is the President of the British Computer Society. He gave a talk at the BA Festival of Science examining artificial intelligence, titled 'Free-thinking Machines or Murderous Intellects'. Scary stuff indeed...

Chris -   Should we be scared of robots?

Nigel -   The problem is, when we look at the film portrayal of robots, they're invariably murderous intellects and up to no good. That's the popular image of robots and artificial intelligence (AI), and in fact somebody once defined AI as the art of making computers behave like the ones in the movies. But actually, it isn't anything quite as malign or as bad as that.

Chris -   What exactly is artificial intelligence?

Nigel -   Artificial intelligence is a branch of study where you're trying to understand the nature of intelligence by building computer programs: trying to build adaptive software systems, and trying to take hard problems about the way in which humans and other animals see and understand the world and build computers capable of replicating some of that behaviour.

Chris -   Basically you're trying to capture the workings of the human brain in a computer program?

Nigel -   Well, that's certainly one of the ambitions, although many people working in the area say 'there are lots of ways of being smart that aren't smart like us', so in fact we would be happy to build systems that display adaptivity but aren't necessarily modelled on the way humans operate.

Chris -   So what are you doing to try and create programs that can do this?

Nigel -   Well in fact, the history of AI is a very interesting one. Again, if we look at the film portrayal, one of the earliest and most famous AI computers was HAL, the computer in 2001: A Space Odyssey, which was made in 1968.

Chris -   It shot someone out of a space station, didn't it?

Nigel -   Well yes, and it had a space hotel, it had us going to sleep in cryogenic suspension - we haven't got that either, so predicting the future can be a bit dodgy. But HAL was aware, he was reflective, and he turned into a murderous, paranoid killer in the end. The bit the film got right was the chess playing. In fact, an AI chess program beat the world champion back in 1997. At the time, people said it was a crisis for the species, but in fact what it showed us was that huge increases in computing power, plus a little knowledge and insight, can tackle very challenging problems indeed.

Chris -   So what are the big things people are working on now? What are they trying to crack in order to develop better robots?

Nigel -   In AI in general, with this brute-force approach and the amount of computing power now available, we can do a whole range of things. In fact, AI is kind of everywhere but not recognised as such. It's in your car's engine management system, where rule-based systems check whether everything is running properly; it's in your washing machine, managing the spin cycle; it's translating languages in your Google search... lots of very mundane AI.
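(As a rough illustration of the 'rule-based systems' Nigel mentions, here is a minimal sketch in Python of an engine-management-style check. The sensor names and thresholds are invented for illustration; real engine management is far more involved.)

```python
# A toy rule-based monitor, loosely in the spirit of the embedded
# "is your system running properly?" checks described above.
# Sensor names and thresholds are hypothetical.

def check_engine(readings):
    """Apply simple if-then rules to a dict of sensor readings."""
    warnings = []
    if readings["coolant_temp_c"] > 110:
        warnings.append("Coolant temperature too high")
    if readings["oil_pressure_kpa"] < 100:
        warnings.append("Oil pressure too low")
    if readings["rpm"] > 6500:
        warnings.append("Engine over-revving")
    return warnings or ["All systems nominal"]

print(check_engine({"coolant_temp_c": 95, "oil_pressure_kpa": 250, "rpm": 3000}))
```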

Chris -   So these are machines actually watching what's happening, reacting to what's changing and learning from their experiences, so they do the right thing the next time?

Nigel -   Absolutely right. Of course it doesn't accord with our popular image of AI, but this is assistive intelligence; it's there supporting us in particular tasks. What we haven't got are general-purpose robots that are able to reflect across a whole range of problems, and the person who was talking about semantic vision robots was making exactly that point: it's hard to make programs that operate routinely across many problem areas.

Chris -   So when creating robots, do you always engineer an 'off' switch, so there's no risk of these things running amok and taking over the world, which is what people are most frightened of?

Nigel -   If you look at the commercialisation of robots, there's a company in the States, iRobot, that manufactures simple house-cleaning robots that trundle around. They're particularly good in big, open American homes, trundling around hoovering up the debris or cleaning the bottom of the pool, but really that's a composition of a few simple behaviours: the robot avoids colliding with objects, and it can more or less build a simple map of its environment. The question, of course, is that when you take that same technology and put it into a weapons platform or into some of the more military contexts, you would want human control, and this is exactly what we see in the modern deployment of robots on the battlefield. Making sure that we, the human designers, understand the ethical implications, and how we build override and safety into these systems, is a hugely important question.
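(As a rough sketch of the 'composition of a few simple behaviours' described above, here is a toy priority-based reactive controller in Python. The behaviours, sensor fields and priorities are invented; this is not iRobot's actual design.)

```python
# A toy priority-based composition of simple behaviours, in the spirit
# of behaviour-based robotics. Sensor inputs and actions are invented.

def avoid_obstacle(sensors):
    if sensors["bump"]:
        return "back up and turn"
    return None

def follow_wall(sensors):
    if sensors["wall_nearby"]:
        return "hug the wall"
    return None

def wander(sensors):
    return "move forward"  # default behaviour, always applicable

BEHAVIOURS = [avoid_obstacle, follow_wall, wander]  # highest priority first

def decide(sensors):
    """Return the action of the highest-priority behaviour that fires."""
    for behaviour in BEHAVIOURS:
        action = behaviour(sensors)
        if action is not None:
            return action

print(decide({"bump": False, "wall_nearby": True}))  # -> "hug the wall"
```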

Chris -   How far away are we from having a system where I could have a conversation with a robot, or a robot could be presenting this programme and no one would know?

Nigel -   Well, this is the famous Turing test. Alan Turing, a great computer scientist, actually helped crack codes in the Second World War using computing techniques. He was hugely interested in AI, and he said that if we ever got to that stage, we'd effectively have built an artificial intelligence. But there are many situations where programs can do a good job of emulating a human, yet not across the whole range of behaviour. The great thing about human beings, of course, is that we are able to anticipate the unexpected - the kind of snag that crops up in making a show like this - so I would think that your job is safe for a little while yet.
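(A bare-bones sketch of the Turing test set-up Nigel describes: a judge exchanges messages with a hidden respondent and must guess whether it is human or machine. Everything here is schematic; the placeholder machine_reply function stands in for a real conversational program.)

```python
# A minimal imitation-game skeleton. No real chatbot is implemented;
# machine_reply is a canned placeholder for a conversational AI.

import random

def machine_reply(message):
    return "That's an interesting point. Tell me more."

def human_reply(message):
    return input("Human respondent> ")

def imitation_game(rounds=3):
    respondent = random.choice([machine_reply, human_reply])
    for _ in range(rounds):
        question = input("Judge> ")
        print("Respondent>", respondent(question))
    input("Judge, was that a human or a machine? ")
    actual = "human" if respondent is human_reply else "machine"
    print("It was in fact a", actual + ".")

# imitation_game()  # uncomment to play a round interactively
```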
