Science Interviews


Sun, 23rd Sep 2007

Intelligent Items or Malicious Machines? Artificial Intelligence Examined

Professor Nigel Shadbolt, University of Southampton & President of the British Computer Society.

From the show Robots and Artificial Intelligence

Professor Nigel Shadbolt is the President of the British Computer Society. He gave a talk at the BA Festival of Science examining artificial intelligence, titled 'Free Thinking Machines or Murderous Intellects'. Scary stuff indeed...

Chris -   Should we be scared of robots?

Nigel -   The problem is that when we look at film portrayals of robots, they're invariably murderous intellects up to no good. That's the popular image of robots and artificial intelligence (AI), and in fact somebody once defined AI as the art of making computers behave like the ones in the movies. But actually, it isn't anything quite as malign or as bad as that.

Chris -   What exactly is artificial intelligence?

Nigel -   Artificial intelligence is a branch of study where you're trying to understand the nature of intelligence by building computer programmes: trying to build adaptive software systems, trying to take hard problems about the way in which humans and other animals see and understand the world, and trying to build computers capable of replicating some of that behaviour.

Chris -   Basically you're trying to capture the workings of the human brain in a computer programme?

Nigel -   Well, that's certainly one of the ambitions, although many people working in the area say 'there are lots of ways of being smart that aren't smart like us', so in fact we would be happy to build systems that display adaptivity but aren't necessarily modelled on the way humans operate.

Chris -   So what are you doing to try and create programmes that can do this?

Nigel -   Well, in fact, the history of AI is a very interesting one. Again, if we look at the film portrayal, one of the earliest and most famous AI computers was HAL, the computer in 2001: A Space Odyssey, which was made in 1968.

Chris -   It shot someone out of a space station, didn't it?

Nigel -   Well yes, it had a space hotel, and it had us going to sleep in cryogenic suspension - we haven't got that either, so predicting the future can be a bit dodgy. But HAL was aware, he was reflective, and he turned into a murderous, paranoid killer in the end. The bit the film got right, though, was the chess playing. In fact, an AI chess programme beat the world champion back in 1997. At the time, people said it was a crisis for the species, but what it actually showed us was that huge increases in computing power, plus a little knowledge and insight, can tackle very challenging problems indeed.

Chris -   So what are the big things people are working on now? What are people trying to crack in order to develop better robots?

Nigel -   In AI in general, with this brute-force approach and the amount of computing power we now have, we can do a whole range of things. In fact, AI is kind of everywhere but not recognised as such. It's in your car's engine management system, where rule-based systems check whether the engine is running properly; it's in your washing machine, managing your spin cycle; it's translating languages in your Google search... lots of very mundane AI.

Chris -   So this is machines actually watching what's happening, reacting to what's changing, and learning from their experiences so they do the right thing the next time?

Nigel -   Absolutely right. Of course it doesn't accord with our popular image of AI, but this is assistive intelligence: it's there supporting us in particular tasks. What we haven't got are general-purpose robots that are able to reflect across a whole range of problems, and the person who was talking about the semantic vision robots was making exactly that point: it's hard to make programmes that operate routinely across many problem areas.

Chris -   So when creating robots, do you always engineer an 'off' switch, so there's no risk of these things running amok and taking over the world, which is what people are most frightened of?

Nigel -   If you look at the commercialisation of robots, there's a company in the States, iRobot, that manufactures simple house-cleaning robots that'll trundle around. They're particularly good in big, open American homes, trundling around hoovering up the debris or cleaning the bottom of the pool, but really that's a composition of some simple behaviours: it avoids colliding with objects, and it can more or less build a simple map of its environment. The question, of course, is that when you take that same technology and put it into a weapons platform or into some of the more military contexts, you would want human control, and this is exactly what we see in the modern deployment of robots on the battlefield. Making sure that we, the human designers, understand the ethical implications, and how we build override and safety into these systems, is a hugely important question.

Chris -   How far are we away from having a system where I could have a conversation with a robot, or I could be a robot presenting this programme and no one would know?

Nigel -   Well, this is the famous Turing test. Alan Turing, a great computer scientist, actually helped crack codes in the Second World War using computing techniques. He was hugely interested in AI, and he said that if we ever got to that stage, effectively we'd have built an artificial intelligence. But there are many situations where programmes can do a good job of emulating a human without matching the whole range of behaviour, and the great thing, of course, about human beings is that we are able to anticipate the unexpected - the kind of snag that would crop up in making a show like this - so I would think that your job is safe for a little while yet.
