José del Millán, Michele Tavella, École Polytechnique Fédérale de Lausanne
Chris - I've just met a robot driving itself down a hall pursued by two people, one of whom was wearing a hat covered in electrodes. The other was Swiss scientist José del Millán...
Jose - There is a general problem of providing people with the ability to control prosthetic devices, like small robots and wheelchairs, by decoding the electrical activity that the brain generates spontaneously as they execute different mental tasks related to motor movements. So this is the general problem. The specific progress that we are reporting here today is that, through appropriate probabilistic methods, we are able to decode when people intend to deliver a command and when they don't want to deliver any command. So the robot keeps doing whatever it is doing, for example moving forward or staying stationary in front of a target.
Chris - First of all, let's look at how you actually use the power of thought to control a robot. How are you doing that?
Jose - What we do is to record the so-called electroencephalogram; this is done simply by placing some electrodes in contact with the top of the head. Then we use sophisticated algorithms to find prototypical patterns of electrical activity in the brain that are associated with the different mental commands people want to deliver to the prosthetic device.
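The interview doesn't spell out the signal processing, but motor-imagery BCIs of this kind commonly work on band power of the sensorimotor (mu, roughly 8-12 Hz) rhythm picked up over motor cortex. A minimal sketch of that feature-extraction step, assuming NumPy and a 512 Hz sampling rate (all names, rates, and bands here are illustrative, not the lab's actual pipeline):

```python
import numpy as np

FS = 512  # assumed EEG sampling rate in Hz

def band_power(window, fs=FS, band=(8.0, 12.0)):
    """Power in a frequency band (here the mu rhythm, ~8-12 Hz)
    for one channel of EEG, via a simple periodogram."""
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(window)) ** 2 / len(window)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].sum()

def features(eeg_window):
    """eeg_window: (n_channels, n_samples) array -> one mu-band
    power value per channel, log-scaled as is conventional."""
    return np.log([band_power(ch) for ch in eeg_window])

# Toy check: a channel oscillating at 10 Hz carries far more
# mu-band power than low-amplitude broadband noise.
t = np.arange(FS) / FS
rng = np.random.default_rng(0)
strong = np.sin(2 * np.pi * 10 * t)
weak = 0.1 * rng.standard_normal(FS)
print(band_power(strong) > band_power(weak))  # True
```

A classifier trained on such feature vectors is what maps brain activity to the "mental commands" Jose describes.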
Chris - And Michele, you're wearing a hat with all these electrodes on. How does it work?
Michele - Well, the hat is very simple. It's a very light hat. We use some gel to make sure the electrodes make good contact, and by imagining moving my right hand or my left hand I can give right and left commands. As soon as the computer accumulates enough evidence that I really intend a right or a left command, the command is delivered and the robot turns, for instance.
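The "accumulates enough evidence" step Michele describes can be sketched as repeatedly combining per-window classifier probabilities and only emitting a command once one class becomes probable enough; until then, no command is sent and the robot carries on. This is a generic illustration, not EPFL's actual decoder, and the 0.9 threshold is an assumption:

```python
import numpy as np

THRESHOLD = 0.9  # assumed decision threshold on the accumulated belief

def accumulate(posteriors, threshold=THRESHOLD):
    """Combine per-window classifier posteriors (P(left), P(right))
    until one class is probable enough, then emit a command.
    Returns 'left', 'right', or None (not enough evidence: no
    command, so the robot just continues what it was doing)."""
    belief = np.array([0.5, 0.5])  # uniform prior over {left, right}
    for p in posteriors:
        belief = belief * np.asarray(p)   # Bayesian-style update
        belief = belief / belief.sum()    # renormalise
        if belief.max() >= threshold:
            return ["left", "right"][int(belief.argmax())]
    return None  # deliver no command

# Consistent 'right' evidence crosses the threshold...
print(accumulate([(0.3, 0.7), (0.35, 0.65), (0.3, 0.7)]))  # right
# ...while ambiguous windows never do, so nothing is sent.
print(accumulate([(0.5, 0.5), (0.55, 0.45), (0.5, 0.5)]))  # None
```

The None branch is what lets the system tell intentional commands apart from "no command", the specific advance Jose mentioned at the start.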
Chris - So you think 'I want to move my right hand', and this changes the activity of certain parts of the brain, which is reflected in a change in the signal picked up from the scalp by the electrodes, and the computer can decode that.
Michele - The most important thing is that this process of imagining these movements is really, really easy. It's really spontaneous. Everybody can do it quite quickly, so we don't need long periods of training; usually a couple of hours are enough.
Chris - How do you then translate what the computer hears, in terms of the electrical signals, into what the robot does?
Michele - We continuously classify the information extracted statistically, and then a command is delivered to the robot. But the robot is smart, so it's not simply executing it. It has some extra sensors designed to allow easier navigation. For instance, we've put in cameras and we have an obstacle detection system. The robot tends to dock itself automatically as soon as it understands that it's in front of a person, and we have other kinds of proximity sensors to avoid obstacles automatically.
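This division of labour, where the decoded mental command sets the direction while onboard sensors handle docking and obstacle avoidance, is usually called shared control. A toy arbitration layer might look like the following; the thresholds, sensor names, and speeds are all invented for illustration and are not the actual robot's parameters:

```python
def shared_control(user_turn, proximity):
    """Blend the decoded mental command with onboard sensing.
    user_turn: -1 (left), 0 (no command), +1 (right) from the BCI.
    proximity: sensor clearances in metres, e.g.
      {'front': 1.2, 'left': 0.4, 'right': 2.0}.
    Returns (forward_speed, turn) for the robot's motor layer."""
    SAFE = 0.5  # assumed clearance threshold in metres
    DOCK = 0.3  # assumed docking distance in front of a target
    speed, turn = 0.3, float(user_turn)

    # Dock: stop automatically when right in front of something.
    if proximity['front'] <= DOCK:
        return 0.0, 0.0
    # Obstacle avoidance overrides: veer away from a close wall
    # even if the user issued no command.
    if proximity['left'] < SAFE and turn <= 0:
        turn = +1.0
    elif proximity['right'] < SAFE and turn >= 0:
        turn = -1.0
    # Slow down as the front clearance shrinks.
    if proximity['front'] < SAFE:
        speed = 0.1
    return speed, turn

print(shared_control(0, {'front': 2.0, 'left': 0.4, 'right': 2.0}))
# steers right on its own to clear the close left wall
print(shared_control(0, {'front': 0.2, 'left': 1.0, 'right': 1.0}))
# (0.0, 0.0): docked in front of a target
```

The point of the sketch is the one Michele makes next: the user issues sparse, high-level commands, and the robot fills in the low-level details.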
Chris - In the real world of course, someone wouldn't be able to think in excruciating detail all the time about the trajectory or the movement they want to make. So the robot's sort of doing some bits automatically and you can influence them.
Michele - Exactly. So it's not that we are competing. I'm not competing with the robot; rather, the robot is providing a continuous, high-quality aid to my task of navigating the environment, and I'm interacting with this mode of automatic navigation.
Chris - How long did it take you to learn to control the robot like this?
Michele - The first time I tried, when we were building the whole system, it took me a couple of hours to be able to control it well, and the same goes for robots, wheelchairs, keyboards, whatever. And it's not just me. I mean, I'm here today, but we have many more subjects and patients who can control the very same devices with very high accuracy and speed.