This week, robots have taken over the Naked Scientists! Okay, not really but we are looking into the world of robotics to find robots that can clean your floor, disarm bombs and wage war on our behalf. We find out about 'Curious George', a robot that can locate objects in the real world even though it's only ever seen them online, and ask if artificial intelligence will give us free thinking machines or murderous intellects? We also find out about how robots have revolutionised the study of genetics, learn about a mini movie showing the formation of blood platelets in real time, and uncover the oldest human remains ever found outside of Africa. Plus, we explore how a lightning strike acts as a particle accelerator, the science behind the perfect cake mix and in Kitchen Science Ben and Dave explain the principle behind a robot's knees - by showing you how to make an electromagnet!
In this episode
- Can old people not use mousepads?
As you age, your skin tends to get thinner, so it's unlikely to be the same effect as with guitar-string calluses. In fact, thinner skin should be picked up more easily by the trackpad system. Obviously, it depends on your life history - maybe your mature friend is a thrash metal guitarist?
- Focusing Under Water?
This question was answered by Professor Ron Douglas...
It is true that amphibious animals, such as ducks, seals and turtles, can see well in both air and water. For humans, however, the world becomes all blurred as soon as we stick our heads under the water. This is because in animals such as ourselves that live in air, two parts of the eye focus light: the lens within the eye, and the cornea, which is a transparent window at the front.
Of these, in humans, the cornea does about three quarters of the focussing because there is a large difference in refractive index between the air and the cornea.
The lens in our eyes is relatively flat, and is mainly responsible for fine focussing of the image as we look at things at different distances, by slightly changing its shape, becoming fatter as we look at closer objects.
Our world becomes blurred underwater because water and the cornea have very similar refractive indices, so the cornea no longer focuses light. We therefore become very long sighted under water, as our lens is not optically strong enough to focus the light.
A duck, therefore, has essentially the same eye in air that we do, with a cornea that focusses most of the light and a flattish lens. When it goes underwater, however, and the cornea no longer focuses light, it pushes its soft lens against a quite hard iris, and part of the lens bulges through the pupil, forming a sort of nipple on the front surface of the lens.
This acts as a very powerful lens, and allows the animal to see underwater, when the cornea isn't working as an optical surface. This allows diving birds, for example, to both successfully hunt for fish underwater, and to catch the bread that you throw for them on the surface.
Interestingly, there is a group of humans that seem to see quite well underwater: the Moken, wandering sea gypsies who inhabit the coasts of Thailand and Malaysia. They make a living by diving in the sea, often without goggles, to harvest things like abalone.
It turns out that when you compare their ability to see detail underwater to a similar group of Europeans, the Moken do much better. Any camera enthusiast will tell you that if you want to see a large range of distances in focus, in other words, to have a large depth of field, you close down the aperture of the camera.
So when the Moken go underwater what they have learned to do is to close down their pupil, giving them a large depth of field, and compensating for the long sightedness induced by losing the cornea as an optical surface under water. Interestingly, given time, European children can learn to do this as well.
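The physics behind losing the cornea as an optical surface can be sketched with the standard single-surface formula, P = (n2 - n1) / R. The refractive indices and corneal radius below are typical textbook figures, assumed purely for illustration; they are not quoted from the article.

```python
# Illustrative estimate of why the cornea stops focusing underwater.
# Power of a single refracting surface: P = (n2 - n1) / R  (dioptres, R in metres).
# The numerical values are typical textbook figures, assumed for illustration.

N_AIR = 1.000
N_WATER = 1.333
N_CORNEA = 1.376
R_CORNEA = 0.0078  # ~7.8 mm radius of curvature of the front corneal surface

def surface_power(n1, n2, radius_m):
    """Refractive power in dioptres of a single spherical refracting surface."""
    return (n2 - n1) / radius_m

power_in_air = surface_power(N_AIR, N_CORNEA, R_CORNEA)      # roughly 48 D
power_in_water = surface_power(N_WATER, N_CORNEA, R_CORNEA)  # roughly 5.5 D

print(f"Corneal power in air:   {power_in_air:.1f} D")
print(f"Corneal power in water: {power_in_water:.1f} D")
```

With water against the cornea the refractive index step almost vanishes, so nearly 90% of the corneal power is lost, which is exactly the long-sightedness the answer describes.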
01:33 - Oldest Humans Outside Africa
Researchers in Tbilisi, Georgia, have uncovered the oldest human remains ever found outside of Africa, a species of Homo which might even have returned to Africa to spawn modern man...
The Georgian National Museum's David Lordkipanidze and his colleagues, working at a site in Dmanisi, have uncovered a number of skeletons dating back 1.8 million years. These early people are smaller than modern humans and seem to have features intermediate between the more advanced hominids that gave rise to modern humans, and the earlier Homo habilis. Their overall brain and body sizes are small, their hands are more primitive and ape-like, but their legs are more in keeping with advanced species indicating they could travel long distances. But what's intriguing is that these people clearly pre-date Homo erectus, our immediate ancestor, the earliest specimens of which date from about 1.6 million years ago in parts of Ethiopia.
So it may be that early hominids like these Dmanisi people, having left Africa many years before, subsequently returned to what is now Ethiopia to give rise to Homo erectus, who in turn evolved into us...
Scientists have worked out why it is so difficult to mix the ingredients of a cake. Emmanuelle Gouillart and a team from CEA Saclay, near Paris, have been studying how things mix together in a bowl. They added black dye to a clear syrup, mixed it automatically with a rod and studied the results.
They found that in the centre it mixed by a process of stretching and folding, a bit like a baker kneading bread, which is a very effective means of mixing. However, at the edges of the container the material sticks to the walls so well that it doesn't join in with this process, so it keeps adding lumps of unmixed material to the centre, slowing down the whole process.
This means that if you are mixing a cake or paint, it is important to scrape the mixture from the sides of the bowl into the centre. Though this sounds obvious, a thorough understanding of how things mix is important for predicting how a machine you are building will behave, especially at the tiny or huge scales where things don't always behave intuitively.
"Blood-Clotting" - new movie reveals origin of platelets
A Harvard-based research team have successfully produced a miniature movie of the generation of platelets, the key elements that allow blood to clot.
Tobias Junt and his co-workers used a fluorescent dye to label up platelet-producing cells in the bone-marrow of mice and then watched in real time as the cells threaded thin extensions of their membranes into nearby blood vessels. Once in situ the current of passing blood caused the thin finger-like projections to fragment, producing platelets.
Although the origin of platelets was previously known, the steps involved in their production were not. So the improved understanding brought about by these amazing movies, which are reported in this week's Science, may help in the management of bleeding disorders and other conditions associated with low levels of circulating platelets.
07:16 - Thunderstorms Release Gamma Rays
Scientists in Japan have discovered that thunderstorms act as large-scale particle accelerators.
Harufumi Tsuchiya of Japan's RIKEN research institute and colleagues installed a directional gamma ray detector at a nuclear power plant.
Recently this picked up a 40 second burst of high energy gamma rays with a frequency 40 million times higher than visible light.
By looking at the spectrum of these gamma rays, it looks as though they are being emitted by electrons that are accelerated to nearly the speed of light by the extreme voltages that precede a lightning bolt. When these fast-moving electrons abruptly decelerate following a collision with an atom, the excess energy is released as gamma rays.
The researchers realised that the source of the rays was a thunderstorm because the detector is directional and was pointing directly at the storm when the gamma ray burst was picked up.
Shorter bursts from thunderstorms have also accidentally been detected in the past by space-based telescopes, which were built to survey space for high-energy X-ray and gamma ray sources. But this is the first time such a long burst has been detected and tied directly to a thunderstorm.
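To get a feel for the energies involved, photon energy follows from frequency via E = hf. The calculation below simply takes the article's "40 million times visible light" figure at face value; the visible-light frequency is an assumed round number, not a quoted one.

```python
# Photon energy from frequency, E = h*f - a quick check of the scale involved.
# The visible-light frequency is an assumed round value; the gamma frequency
# just takes the article's "40 million times" figure at face value.

PLANCK_H = 6.626e-34   # Planck's constant, J*s
EV = 1.602e-19         # joules per electronvolt

f_visible = 5.5e14          # Hz, roughly green light
f_gamma = 4e7 * f_visible   # the article's "40 million times" multiplier

def photon_energy_ev(freq_hz):
    """Energy of a single photon of the given frequency, in electronvolts."""
    return PLANCK_H * freq_hz / EV

print(f"Visible photon: {photon_energy_ev(f_visible):.1f} eV")
print(f"Gamma photon:   {photon_energy_ev(f_gamma) / 1e6:.0f} MeV")
```

A visible photon carries a couple of electronvolts, so on this figure the lightning-accelerated electrons would be radiating photons tens of millions of times more energetic, well into the gamma ray band.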
09:47 - Pollution Blood-Clotting Trigger Uncovered
Scientists have solved a long-running conundrum connecting high levels of air pollution with an increased risk of heart attacks and strokes...
Writing in the Journal of Clinical Investigation, Gokhan Mutlu and colleagues from Northwestern University in Illinois found that mice exposed to airborne particulate matter formed blood clots much more quickly than normal.
Tests on the animals' blood showed higher levels of the clotting chemical fibrinogen and elevated coagulation factors II, VIII and X.
To find out what was triggering this effect the team tested mice lacking the gene for an immune signalling hormone called IL6. Surprisingly these animals were insensitive to the effects of pollution.
Next, the team gave normal mice a drug called clodronate to remove a population of cells known as macrophages from their lungs. These cells are phagocytes, meaning that they can ingest inhaled foreign material; when they do so, they activate and pump out chemical signals including IL6.
The clodronate-treated animals also showed little response to pollution exposure, like their IL6-lacking counterparts. This suggests that when particulate matter in polluted air enters the lungs it is picked up by macrophages, causing them to activate.
The macrophages then pump out IL6, which provokes increases in blood coagulation factors and makes blood much stickier and more likely to clot, which in turn increases the chances of heart attacks and strokes.
Why do metal pots spark in the microwave?
I guess the cast iron pot had a cast iron lid as well?
Microwaves cause electric currents to flow through metal, so you would get a current flowing through both the pot and the lid.
If the pot and the lid were insulated from each other, and there was enough voltage, you would get a spark as the current jumped between the pot and the lid. This would explain the flash, and it may have damaged the enamel on the surface.
The spark is incredibly hot, and so this could have vapourised some of the enamel, which may be the cause of the smell.
It's unlikely that cookware manufacturers use anything too toxic in the enamel, as you may scrub bits loose and they could wind up in the food!
So, although there may have been some enamel in the food, it should be relatively safe, and in such tiny quantities that the food would not have been dangerous.
14:46 - Robot Wars - The history of Robots and Robots at War
with Professor Noel Sharkey, Sheffield University
Noel Sharkey is professor of Artificial Intelligence and Robotics at Sheffield University. He's been studying robots for years, so who better to ask about how close we are now to seeing the robots of the movies...
Chris - I don't actually know why we call robots, 'robots', where do we get that word from?
Noel - You've asked the right person here. It comes from a play in 1921 by Karel Čapek, who was a Czech playwright. The play wasn't great - it was called 'Rossum's Universal Robots' - but it debuted all over the world, in Tokyo, London and New York, and caused an absolute sensation, because it was the beginning of this idea that robots will take over the world and kill everybody. The play ends with all the humans being killed - just one kept, actually, the scientist who could make new ones. The robots were biochemical, by the way, more like what we call androids, very like humans, and the play closes with the two lead robots, male and female, the female getting pregnant, walking off into the sunset holding hands.
Chris - So why did they decide on the term 'robot'?
Noel - Sorry - the word robot itself comes, I believe, from the Czech word robota, meaning 'forced labour'.
Chris - That's quite true, isn't it, because that's pretty much how we exploit robots.
Noel - Yes, that's what it was at the time, yeah, but they're very different from what we think of as a robot now, really. It wasn't the first big tin robot, or anything.
Chris - Now, most people are acquainted with the fact that we've got robots in car factories spraying cars - and then spraying naughty pictures on them, and spraying over that again, as the adverts would have you believe - but where else, in an exciting context, do we find robots today?
Noel - Well, we find them all over the place. I'm not sure about exciting contexts, but certainly another one you see in adverts is Honda's Asimo robot. You might have seen it - it looks like a little spaceman, a small child. You see it wandering about, going up stairs and meeting with a big spaceman at the Washington museum of science. That robot was very clever because it took something like $18 million to construct over a period from the 1980s, because we didn't have walking robots. It's a fully formed android; I've done quite a bit of work with it myself and walked with it, and it is really convincing - it walks like a human.
Chris - Well why is it so difficult to make it walk though, Noel?
Noel - Balance - centre of gravity. One of the things about Asimo is that it's got a backpack where its computing is, and it's got something called a zero-moment algorithm, discovered by an Eastern European researcher, which is very important. But where they put all the money was actually on the speed of transmission from the sensors. You can think of them like the tilt sensors on a pinball table. The tilt sensors send information to the backpack - the computer - which sends information to the motors and gets them to adjust themselves to keep the centre of gravity right. It does this fifty thousand times a second; that's the real magic - it has to be really fast to keep it up.
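The sense-compute-actuate loop Noel describes can be sketched, very roughly, as a proportional controller: each cycle, the motors cancel a fraction of the measured tilt. The gain, step count and idealised physics below are invented purely for illustration; a real zero-moment-point controller is far more sophisticated.

```python
# A toy sketch of the sense->compute->actuate balance loop described above.
# The gain, step count and idealised physics are invented for illustration;
# real humanoid balance control (ZMP-based) is far more sophisticated.

def simulate_balance(initial_tilt_deg=5.0, gain=0.2, steps=50):
    """Proportional correction: each cycle the motors cancel a fraction of the tilt."""
    tilt = initial_tilt_deg
    for _ in range(steps):
        correction = -gain * tilt   # motor command opposing the measured tilt
        tilt += correction          # apply it (idealised: no noise, no inertia)
    return tilt

final = simulate_balance()
print(f"Tilt after 50 control cycles: {final:.5f} degrees")
```

The point of running the loop tens of thousands of times a second is that each individual correction can be tiny, so the robot never tilts far enough for the idealisation to break down.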
Chris - So is it, when making robots, is it just a case of trying to copy what humans do or are we trying to be a bit more advanced?
Noel - Well, with the walking we're not quite as good as humans; when I say it walks like a human - if you see it, and actually work with it - it walks like a human who's dying for the loo; it's got that kind of lavatory walk, in a hurry to the toilet! So it's kind of an odd walk. We've always been trying to make robots better than humans, because we want them to do things that we can't do - obviously heavy lifting and stuff like that. But in the main you find robots now - not a lot in Britain; we're actually the worst in Europe, or maybe the best, as we have fewer robots in the workplace than anywhere else. In Japan, for instance, there's a vast number of robots doing floor cleaning, pool cleaning, window cleaning, all kinds of cleaning. I have a robot vacuum cleaner myself...
Chris - How does that work?
Noel - Well, you just put it on the floor! The big hold up for robot vacuum cleaners since the 1950s was that they couldn't do stairs.
Chris - Like the Daleks! But how do they know where the dirt is?
Noel - Well they don't. But they Gave up on the stair business because somebody have the good idea of saying "we'll leave the stair business for now, and we'll make them small so that people can carry them up the stairs". Mine's like a frizbee, it's a 'Roomba' made by iRobot and what it does is has a lovely spiralling movement round the floor and if it meets an obstacle it avoids it and carries on with the spiral. So sometimes it will cover the same ground again, twice.
Chris - We had a pool cleaner once which had a penchant for cleaning one bit of the swimming pool but avoided the same bits every time. It was a real pain, because you then ended up wasting loads of time getting the brush out just to clean that bit, so in fact it took almost as long to do it yourself, and clean the whole pool, as it did to get this blinking robot going. Hopefully they will improve that in the future.
Noel - The Roomba does a bit of that - it can get stuck in a corner, so you have to keep your eye on it a little; it's not absolutely perfect.
Chris - Now one of the things that you were involved with is robot wars, is that right?
Noel - Yes, that's right, yes.
Chris - One of the things that people have been talking about a lot is getting robots to perhaps go into battle on our behalf.
Noel - Yes, that's correct, yes.
Chris - So how would that work?
Noel - It's not just something people are talking about - there are already a lot of robots in service; about 4000 in Iraq at the moment, and an awful lot in Afghanistan. It's very difficult to track the numbers because the military aren't completely forthcoming.
Chris - But what are these robots doing there?
Noel - Mainly, they're doing useful work in bomb disposal, so what we call IED, which is Improvised Explosive Devices. They drive them round and look for explosives, they have a camera on them, they're remote controlled mainly, and when they find one they use the robot to detonate them, or something like that. There's some quite funny stories coming out because the soldiers are treating these like real beings, even though they're remote controlling them. There's a droid hospital where the soldiers take their robots to have them fixed and soldiers what the same robot back, even though they're offered a new model of exactly the same robot. And there are lots of stories of soldiers taking their robots fishing on their day off, they're sitting in the boat and they put the fishing rod in the robot's claw hand. They come very attached to them.
Chris - You'd be worried about it shorting out if it fell in, wouldn't you?
Noel - That's true! But I think they're more attached to them because when people are in danger, they're more attracted to the thing they're in danger with. Recently they've sent in - only 4 so far - these bomb disposal robots made by Foster-Miller, called the Talon SWORDS, that are armed with M24 and M249 machine guns, 50 calibre rifles, grenade launchers and anti-tank rocket launchers, and these are still remote controlled. I've seen these and they're deadly - it looks like a small Robot Wars robot on tracks, but then you see the machine guns and things on top. The army really like them; they're very useful for killing people without actually confronting them.
Chris - Presumably they don't draw a salary, which is quite beneficial.
Noel - They don't draw a salary, but they cost quite a lot. Though they're not that expensive - about $15,000, which is about £8000 - so you can't just throw them away. They've sent 4 in and there are 80 more on order. Everybody wants one, so they're talking about very significant numbers of them. A new company, Robotic FX, has just been given an order for 3000 more to go into Iraq. The usual company is iRobot, with Rodney Brooks - the company that makes my vacuum cleaner.
There's an interesting story about the new robot, because the guy who made it was an employee of iRobot; he left and has just got a $180 million contract from the military. iRobot's detectives have witnessed him putting stuff into dumpsters and wiping hard drives, so he's now being prosecuted for stealing their ideas. There's a lot of money at stake - that's the big thing; there are a lot of companies involved here.
And of course we've got the killer robots in the sky. People may not think of them as robots - like the cruise missile or pilotless aircraft, which have been around in America since about 1918.
The Predator robot will do almost anything autonomously. It found a second in command of Osama Bin Laden, and what it did was very clever - I don't like this stuff at all, by the way; if I sound like an evil genius or something, I'm not, I don't like this - but what it did was find him in a car, switch on his mobile phone using satellite technology, and as soon as the phone came on, the operator, who was 7000 miles away in the Nevada desert, pressed a button and it vapourised the car with two Hellfire missiles.
24:15 - Robots in Genetic Research
with Sarah Sims & Jonathan Davies, Wellcome Trust Sanger Institute
Meera - This week I'm at the Wellcome Trust Sanger Institute in Hinxton, Cambridgeshire, where DNA sequencing and analysis occur on a huge scale, to help us understand just what our genes do and how they function. This one place sequenced a third of the 25,000-gene human genome. But how did they do it?
In order to have genes ready to be sequenced, you need to insert them into a bacterium whose DNA sequence you already know. You place your gene insert within a part of the sequence that encodes a colour - for example, blue colouring. That way, when the bacterium multiplies to produce colonies, the bacteria containing your insert will have this colouring disrupted and remain colourless, which allows you to pick them out from the blue bacterial colonies that don't have your gene insert.
So you then pick out your colonies, break open the cells of the bacteria, extract the DNA and load it onto the machines to be sequenced. But as smooth as that sounds, the stages, such as picking the colonies, can be tedious and time-consuming and can also differ depending on the opinion of the people choosing the colonies. The solution? Get a robot to do this for you!
The robots that pick colonies are basically a big box, in which there's a tray at the bottom to put lots of petri dishes in and a robotic arm that hangs down from the top and seems to slide around. But how on earth does this box actually pick specific colonies of bacteria?
I'm off to meet Sarah Sims who's going to fill me in on just how these robots do their job...
Sarah - Well, there's a camera that looks at each of the petri dishes, which have colonies on them, and it can see the colonies. It's been programmed to look at a certain size of colony, whether it's a single colony, and what colour it is, because there are two types of colony on there: a colony with an insert and one without. The one without an insert is blue, and the robot is able to identify that and doesn't pick blue colonies - it just picks the colonies that have the insert.
Meera - Just in the time that I've been in here now how many colonies would you say have been picked?
Sarah - Well it picks about 2000 colonies an hour. Before robots we used to pick about 800 colonies, compared to 2000.
Meera - That's a really big difference...
Sarah - Yes it is. It's made a huge difference with the amount of throughput we can do.
Meera - So, I know that when it comes to picking colonies you need really sterile conditions, because it's so easy for things to get contaminated. How is that managed with robots?
Sarah - Well the room that we're in is in sterile conditions with the air filtered, and also the robots are enclosed in a glass box, which helps prevent any air getting in to contaminate things.
Meera - I don't think I need to say that you're very happy about the introduction of robots?
Sarah - Yeah...yes. With the numbers we have to produce we wouldn't be able to do it without an awful lot more people and an awful lot more lab space.
Meera - So, having learned just what a difference robots have made here at the Sanger Institute, I want to know how they work and how they were designed in the first place. So I'm here with Jonathan Davies, who's project manager on the robotics team.
Hi Jonathan, you design the robots do you?
Jonathan - Yes, we look at what the people in the labs want to do and then we try and design a robot that'll cope with the tasks that they want to perform.
Meera - Did you design the colony picking ones or the pipetting ones that I've happened to see this morning?
Jonathan - Yes, they were designed by this group.
Meera - What was actually involved in coming up with the designs for these robots?
Jonathan - There was quite a lot of work to do with camera imaging and looking at colonies. When you pick them by hand, you use a lot of judgement with your own eye and your own hand, and making the robot make those judgements as well took quite a bit of fine tuning to get right.
Meera - Would you say the robots are more accurate than when the sampling is done manually?
Jonathan - I think what's more important is that they do the same task again and again, with the same accuracy. With a person doing it, sometimes you get it spot on, other times it's not quite, but with a robot doing it you get the same result time after time after time. That's usually what's wanted.
Meera - These robots have made a big impact on work here at the Sanger Institute. What's next, what are the future prospects?
Jonathan - We're trying to come up with robots that are a lot more flexible. The ones you've seen were designed for one job, and they do one job only. They do it very well, but people want to do different tasks depending on what results they get, and they may want to change the robot to do it. So they want to add a bit of intelligence, if you like, and we're trying to go down that route.
Meera - But is there any risk in developing robots like that, that you might be pushing aside actually people in the labs?
Jonathan - The decisions that these robots make - they're in no way as clever as a human being; they can only make pretty limited decisions, to be honest. They haven't got the intuition that a human being has, which is what you need to get new results and keep things moving forward.
Meera - It looks like this is only the beginning of robot use in gene sequencing, but I don't think this is a case where robots are going to take over the role of humans. Robotics is merely taking away the drudgery of lab work, allowing teams to spend time on more complicated procedures and theories, and preventing them from getting things like repetitive strain injury...
I wonder if I can get them to design a robot to do my typing for me...
30:52 - Train a Robot? Why bother, when he can just look it up?
with Professor Jim Little and Dr Per-Erik Forssen, University of British Columbia
The Semantic Robot Vision Challenge was set up to find robots which could locate an object in real space, after only seeing it in cyberspace. We spoke to Professor Jim Little and Dr Per-Erik Forssen about their winning robot, Curious George...
Chris - What was the big challenge you were trying to overcome here?
Jim - Well the semantic robot vision challenge was a contest to bring together computer vision scientists and roboticists. The challenge involved a robot learning how to find a group of objects in a room.
Chris - Why's that such a challenge?
Jim - Well we'd like to develop robot home assistants, or intelligent devices to assist in the home and a robot has to know the various and unusual objects that live in a place with us. We've gotten some to recognise particular objects, like a box of tissues or a cola bottle, but to work in a home a robot needs to see and understand objects like chairs and cups and tables and they're much more challenging and interesting to recognise.
Chris - So if a person, for instance, said "I'd like a cup" - because their cup is different from every other cup the robot has ever seen, you need a robot that can intuitively work out what a cup must be. That sounds impossible - how do you go about doing that?
Jim - Well in this particular case we looked at images we got from the web by looking up the word 'cup' on search engines and we tried to find characteristics that cup images might share, such as the circular opening of the top and more or less cylindrical sides, or the handle on the cups, and these have appearances that we can try to recognise in the images, and then when we go look for them in the room we can find the object by identifying these features again.
Chris - So it's just going on Google and trawling through images of things fitting the tag 'cup', and then deciding that must be what a cup looks like. So how does it decode the picture to work out what the parts are? How does it attach the same importance to the hole in the top and the handle as to the shape?
Jim - Currently we just use techniques for finding interesting and distinct points on the object. Other groups work more on the shape of the boundary of the object. But we've come a long way towards being able to recognise these distinctive features from different viewpoints and different images, and in the challenge what we did was look for features that showed up many times in different images of cups, for example.
Chris - So what if someone was really nasty and they mislabelled a picture of a cup and it's actually a saucer, and it says cup and saucer, but it's only a saucer. Would your robot then be fooled?
Jim - It would, but what it does is it tries to gather lots of evidence so if the features show up many times in the images, it recognises that this feature is useful for cups and that the other one was irrelevant. In fact, going to google to get images means you get lots of images, most of which are useful but not all of them.
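The evidence-gathering idea Jim describes - keep the features that recur across many web images, so a single mislabelled photo gets outvoted - can be sketched as a simple voting scheme. The string "features" below stand in for real image descriptors (a real system would use SIFT-style features), and all the names and thresholds are invented for illustration.

```python
# A hedged sketch of the evidence-gathering Jim describes: features that recur
# across many training images labelled 'cup' are kept, while one-off features
# (e.g. from a mislabelled saucer photo) are outvoted. Feature extraction is
# faked with string tokens; a real system would use SIFT-style descriptors.
from collections import Counter

def learn_object_model(image_feature_sets, min_support=0.5):
    """Keep features that appear in at least min_support of the training images."""
    counts = Counter()
    for features in image_feature_sets:
        counts.update(set(features))
    threshold = min_support * len(image_feature_sets)
    return {f for f, c in counts.items() if c >= threshold}

# Four web search results for 'cup'; the last is a mislabelled saucer.
web_images = [
    {"circular_rim", "handle", "cylindrical_side"},
    {"circular_rim", "handle"},
    {"circular_rim", "cylindrical_side", "handle"},
    {"flat_disc", "circular_rim"},   # the saucer
]
model = learn_object_model(web_images)
print(sorted(model))  # 'flat_disc' is outvoted; rim, side and handle survive
```

The saucer's 'flat_disc' feature appears in only one of four images, so it falls below the support threshold, which is exactly how "lots of evidence" makes the system robust to a few bad labels.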
Chris - If I could just switch across to Per, what were the major problems you had to overcome to make this happen?
Per - You have this problem when you search the internet, you have many images that match the same tag and we've tried various ways of filtering out the bad images, like if you have a cartoon of an object or if you have a person drawing something by hand, it doesn't match as well with the real world, so that was one big problem we encountered. Another thing was when we actually went out into the environment and started looking for the objects; we had to somehow limit the search so we had our robot being interest-driven. The problem with the environment we had in the competition was that it had many interesting things, so the robot looked at other things than the object.
Chris - Jim, when you actually did the semantic robot challenge, what was the competition like? What were other people wheeling out?
Jim - There were other small robots. Ours was large: we had a large robot with a stereo camera on it and a simple still image camera, all on a pan-tilt unit. The other competitors also had many different cameras, and we all had small platforms that allowed us to move around amongst the tables that composed the contest region. The objects we were looking for were either on the tops of the tables or on the floor, kind of separated from each other, scattered around the room.
Chris - How successful were your robots? Presumably they put things there that the robots had no chance of ever having seen, so they would have to teach themselves to recognise them - how successful were you?
Jim - We did well. There were 15 objects we were asked to find, and of those 15 we found 7. 'Finding' an object means returning to home base with a picture of the object and a rectangle drawn on the image to say exactly where the object is. We did very well on specific objects, like particular brands of potato chips or chocolate bars. Much harder, though, is finding generic objects like red peppers, or cups, or vacuum cleaners. We succeeded in getting a red pepper, but we think it was by accident, because we happened to find a picture on the web that looked very similar to the pepper we actually found.
Chris - So home-help robots, but not for people who happen to have a whole shelf of peppers - not for people in Italy, then?
Jim - Haha - apples are hard too...
How do mouse pads work?
I think that trackpads on laptops work by having two sets of wires, one running horizontally and one vertically, and they look at the capacitance between the two of them - so if you create a voltage on one of them, how much of that voltage is picked up by the other one. If you put your finger near it, this changes quite considerably; in fact, anything conductive near it will change it. The problem is that if you've got thick, dry, calloused skin on the tip of your finger, this acts as an insulator and stops the effect from happening. The iPod click wheel is either more sensitive, or possibly pressure sensitive.
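The row-and-column scan described above can be sketched as a search for the crossing where the capacitance changed most. The grid values and the size of the finger's effect below are invented for illustration; a real controller measures mutual capacitance at every crossing and interpolates between neighbours for sub-cell precision.

```python
# A toy model of the row/column capacitance scan described above. The grid
# values and the finger's effect are invented; a real trackpad controller
# measures mutual capacitance at each wire crossing and interpolates the peak.

def locate_finger(grid):
    """Return the (row, col) of the largest capacitance change in the scan."""
    best, pos = None, None
    for r, row in enumerate(grid):
        for c, delta in enumerate(row):
            if best is None or delta > best:
                best, pos = delta, (r, c)
    return pos

# Simulated change in mutual capacitance (arbitrary units): a conductive
# fingertip perturbs the field most strongly at one crossing. With dry,
# calloused (insulating) skin, all these deltas would shrink towards zero.
scan = [
    [0, 1, 0, 0],
    [1, 9, 2, 0],   # finger centred near row 1, column 1
    [0, 2, 1, 0],
]
print(locate_finger(scan))  # -> (1, 1)
```

This also makes the callus explanation concrete: if the skin insulates the fingertip, every entry in the scan stays near the noise floor and no clear peak emerges for the controller to report.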
39:32 - Intelligent Items or Malicious Machines? Artificial Intelligence Examined
Intelligent Items or Malicious Machines? Artificial Intelligence Examined
with Professor Nigel Shadbolt, University of Southampton & President of the British Computer Society.
Professor Nigel Shadbolt is the President of the British Computer Society. He gave a talk at the BA Festival of Science examining artificial intelligence, titled 'Free Thinking Machines or Murderous Intellects?'. Scary stuff indeed...
Chris - Should we be scared of robots?
Nigel - The problem is, when we look at film portrayals of robots, they're invariably murderous intellects and up to no good. That's the popular image of robots and artificial intelligence (AI), and in fact somebody once defined AI as the art of making computers behave like the ones in the movies. But actually, it isn't anything quite as malign or as bad as that.
Chris - What exactly is artificial intelligence?
Nigel - Artificial intelligence is a branch of study where you're trying to understand the nature of intelligence by building computer programmes, trying to build adaptive software systems, trying to take hard problems about the way in which humans and other animals see and understand the world and build computers capable of replicating some of that behaviour.
Chris - Basically you're trying to capture the workings of the human brain in a computer programme?
Nigel - Well, that's certainly one of the ambitions, although many people working in the area say 'there are lots of ways of being smart that aren't smart like us', so in fact we would be happy to build systems that display adaptivity but aren't necessarily modelled on the way humans operate.
Chris - So what are you doing to try and create programmes that can do this?
Nigel - Well, in fact, the history of AI is a very interesting one. Again, if we look at the film portrayal, one of the earliest and most famous AI computers was HAL, the computer in 2001: A Space Odyssey, which was made in 1968.
Chris - It shot someone out of a space station didn't it?
Nigel - Well yes, it had a space hotel, it had us going to sleep in cryogenic suspension - we haven't got that either, so predicting the future can be a bit dodgy - but HAL was aware, he was reflective, and he turned into a murderous, paranoid killer in the end. But the bit that the film got right was the chess playing. In fact, an AI chess programme beat the world champion back in 1997. At the time, people said it was a crisis for the species, but in fact what it showed us was that huge increases in computing power, plus a little knowledge and insight, can really tackle very challenging problems indeed.
Chris - So what are the big things people are trying to crack now in order to develop better robots?
Nigel - In AI in general, with this brute-force approach and the amount of computing power available, we can do a whole range of things. In fact, AI is kind of everywhere, but not recognised as such. It's in your car's engine management system, where rule-based systems check whether the engine is running properly; it's in your washing machine, managing the spin cycle; it's translating languages in your Google searches... lots of very mundane AI.
Chris - So this is machines actually watching what's happening and reacting to what's changing and learning from their experiences so they do the right thing the next time?
Nigel - Absolutely right. Of course, it doesn't accord with our popular image of AI, but this is assistive intelligence; it's there supporting us in particular tasks. What we haven't got are general-purpose robots able to reflect across a whole range of problems, and the person who was talking about the semantic vision robots was making exactly that point: it's hard to make programmes that operate routinely across many problem areas.
Chris - So when creating robots, do you always engineer in an 'off' switch, so there's no risk of these things running amok and taking over the world, which is what people are most frightened of?
Nigel - If you look at the commercialisation of robots, there's a company in the States, iRobot, that manufactures simple house-cleaning robots that trundle around. They're particularly good in big, open American homes, trundling around hoovering up the debris or cleaning the bottom of the pool, but really that's a composition of some simple behaviours: the robot avoids colliding with objects, and it can more or less build a simple map of its environment. The question, of course, is that when you take that same technology and put it into a weapons platform, or into some of the more military contexts, you would want human control - and this is exactly what we see in the modern deployment of robots on the battlefield. Making sure that we, the human designers, understand the ethical implications, and how we build overrides and safety into these systems, is a hugely important question.
Chris - How far are we away from having a system where I could have a conversation with a robot, or I could be a robot presenting this programme and no one would know?
Nigel - Well, this is the famous Turing test. Alan Turing, a great computer scientist, helped crack codes in the Second World War using computing techniques. He was hugely interested in AI, and he said that if we ever got to that stage, we'd effectively have built an artificial intelligence. But there are many situations where programmes can do a good job of emulating a human, just not across the whole range of behaviour, and the great thing about human beings, of course, is that we are able to anticipate the unexpected - the kind of snag that would crop up in making a show like this - so I would think your job is safe for a little while yet.
Do you get wetter if you run or walk through the rain?
The simplest answer is that it depends on the rain, but if we assume that the rain is falling steadily and straight down, and that you don't change shape as you walk, you will get wetter if you walk.
This is because the rain can hit you in two ways: it can land on all your horizontal surfaces by simply falling on them, or it can hit your front as you walk into it. If there is no wind, the amount of rain that hits your front depends only on the volume of space you walk through - in other words, on how far you walk. If the distance is fixed, the only thing that changes is the amount of rain falling on your horizontal surfaces, which depends on how long you are out in the rain for, so running probably does make sense.
Of course, in the real world everything is a bit more complex. If there is a wind moving with you, it may pay to move a little slower: by walking at the same speed as the wind, you can get to where you are going without walking through any rain, just having it fall on your head.
Also, if you are caught in a very short thunder shower that will finish quickly, it may pay to move slowly during the shower, as you won't run into any rain while it is at its heaviest. Although I still think that, in general, running is your best strategy - unless you trip up and twist your ankle...