AUTOMATE: The World of Robots

The team discuss robots including the ExoMars Rover, lab robots and voice recognition systems like Siri...
04 March 2014


Robots are under examination this week. Engineer Blaise Thomson from Vocal IQ, who designs speech systems for smartphones; Neil Bargh, who builds robots for science labs; and Airbus systems engineer Paul Meacham, who is building the next rover to explore Mars, join Chris Smith, Dave Ansell and Ginny Smith to pit their wits against the assembled Cambridge public, answering questions like: how would the Mars rover fare in Robot Wars? Plus, we make a motor from scratch and find out what happens when we dunk electronic devices in liquid nitrogen...

In this episode


03:52 - Robots that talk back

Blaise Thomson explains how he designs spoken language interfaces to allow computers to speak to us...

Robots that talk back
with Blaise Thomson, Vocal IQ

Blaise Thomson from Vocal IQ, a spin-off company from the University of Cambridge, explains to a live audience at the Cambridge Science Centre how he designs spoken language interfaces to allow computers to understand us and respond appropriately...

Chris - So Blaise, let's kick-off with you. Tell us a bit about what you actually do to try and make computers understand what we say. Why is that so difficult?

Blaise - Well, it's difficult because there are so many ways that we produce language. There are so many different accents and there are so many ways of phrasing things. More problematic actually, people keep coming up with new ways. So, when you think you've got everything sorted then people will start adding some slang that never got seen before. They rephrase what they were saying and also, they speak in ways which are not actually grammatical. So, a lot of the time, people think that we're speaking in full sentences, but of course, we're just speaking in phrases or sections of words, and people have some way of understanding that even if it doesn't really make sense.

Chris - Didn't the people of Birmingham become rather offended recently when it turned out that the speech software on the council's telephone system couldn't understand what they were saying? That is true, isn't it?

Blaise - I believe that is true actually.

Chris - But why should the system object to the population of Birmingham?

Blaise - Well, it's not a personal thing.

Chris - I should hope not.

Blaise - I mean, of course, there are some engineers who might have personal vendettas against the population of Birmingham. But in this case, it's probably because the computer hadn't seen any examples of people speaking in that particular way, and that way was so different from what it had seen that it found it difficult to understand. What you need to understand is that these systems learn language by looking at examples of how people have spoken in a database they've been given. So, they'll look through a whole collection of sounds - we have sounds like 'aah', 'bah', 'kah', etc., and each of those sounds has a different representation as a soundwave. When you look at the soundwave and do some analysis of it, each of these different sounds comes out differently. Effectively, the computer is trying to find the most likely sounds for a soundwave that it gets shown. In the case of the people in your example, probably what happened was that the most likely soundwaves, according to what it had seen from other people, were actually different from what the person was saying.
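To make Blaise's description concrete, here is a toy sketch in Python of that matching step: each phone has a typical pattern of acoustic features, and the recogniser picks the phone whose pattern makes the observed sound most likely. The feature values and models below are invented for illustration; real recognisers model whole sequences of sounds.

```python
import numpy as np

# Toy phone models: each sound has a typical feature pattern (mean) and
# some allowed variation (var). These numbers are made up for illustration.
phone_models = {
    "aah": {"mean": np.array([0.8, 0.2]), "var": np.array([0.05, 0.05])},
    "bah": {"mean": np.array([0.1, 0.9]), "var": np.array([0.05, 0.05])},
    "kah": {"mean": np.array([0.5, 0.5]), "var": np.array([0.05, 0.05])},
}

def log_likelihood(features, model):
    # Diagonal-Gaussian log-likelihood of the observed feature vector.
    diff = features - model["mean"]
    return float(-0.5 * np.sum(diff**2 / model["var"] + np.log(2 * np.pi * model["var"])))

observed = np.array([0.75, 0.25])   # features from one slice of soundwave
best = max(phone_models, key=lambda p: log_likelihood(observed, phone_models[p]))
print(best)                         # -> "aah": the most likely sound
```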

Chris - So, it doesn't have a problem with Scousers, or people going 'eh' at the end of each line, then?

Blaise - Well, same principle. It wouldn't, as long as it had seen examples of it - and I think that's actually true of humans as well.

Ginny - This is actually a really good point for me to do the first little bit of my demonstration. We can't really experience what people from Liverpool sounded like to the computer, but I've got an example of something that might give you an idea. I just have to get back to the laptop. Okay, so I want you to listen to this...

(sound recording)

Ginny - Who thinks they understood that? A few people, but not everyone. Did some of you just not understand that at all? Yeah, we've got quite a few people who didn't understand. So, if you imagine you're a computer and that's someone from Liverpool, that's what they would've sounded like. But your brains are amazing and they can use experience and learn from it. So, if I carry on playing this...

Recording - Please wave your arms in the air.

Ginny - Now, I'm going to play the exact same thing I played to you before again, but hopefully this time, you should understand it.

(sound recording)

Ginny - Did everyone understand it the second time?

Audience - Yes.

Ginny - So, that's something called sine-wave speech. It's just been put through a sort of scrambling programme to make it sound weird. And to start with, your brain doesn't know what to do with it; it's never heard those kinds of noises before. I thought it sounded a bit like bird song, but I wasn't really sure what was going on. But once you know that it's speech and you know what it should say, your brain can unscramble it and figure out what it's saying. And actually, now you've heard one example of sine-wave speech, you should be able to hear almost anything said in sine-wave speech. So, I've got an example of me saying something different...

(sound recording)

Ginny - If you understood that, can you do what it says?

Audience - Clapping.

Ginny - Thank you very much. So, once you've had an example of sine-wave speech to use, you can understand other sine-wave speech, and that, I believe, is what you're trying to train your computers to do.

Blaise - Exactly. So, computers are exactly like that. The more examples they see of someone speaking, the better they are at understanding. And probably, what happened in that particular case is that they hadn't seen enough examples of people from that area. If they had done, then they'd be able to learn.
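For the curious, sine-wave speech of the kind Ginny plays is made by replacing a recording's main resonances (formants) with pure sine tones that follow the same frequency tracks. Here is a minimal sketch of the synthesis step, with invented formant tracks rather than ones measured from real speech.

```python
import numpy as np
from scipy.io import wavfile

sr = 16000
t = np.linspace(0, 1.0, sr, endpoint=False)

# Invented formant tracks (Hz): gentle glides, a bit like a vowel transition.
f1 = 500 + 200 * np.sin(2 * np.pi * 2 * t)
f2 = 1500 + 300 * np.sin(2 * np.pi * 3 * t)
f3 = 2500 + 100 * np.sin(2 * np.pi * 1 * t)

def tone(track):
    # Integrate frequency to phase so the pitch can vary over time.
    phase = 2 * np.pi * np.cumsum(track) / sr
    return np.sin(phase)

signal = tone(f1) + 0.5 * tone(f2) + 0.25 * tone(f3)
wavfile.write("sine_wave_speech_demo.wav", sr, (signal / 3 * 32767).astype(np.int16))
```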

Chris - There was one bit of medical transcription I saw where the doctor had actually said into one of these speech interpretation things, "I would partially halve the narcotics" and the computer translated it as, "I would prescribe parties and halve the narcotics." So, it must be quite difficult when some people tend to link their words together while others speak with their words very clearly separated. How can you get a computer to tell those things apart, because the patterns must end up looking quite similar?

Blaise - They do, but there's a process by which the computer can just try out the different alternatives - it'll essentially try every possible alternative and look at which one is the most likely. So in your case, the computer is splitting its analysis into two parts. There's one part which looks at the sound, but there's also another part which looks at the sequence of words, and some sequences of words are more likely than others. In fact, that's one place where you can get a lot of context. For example, if you type a few words into Google or any other search engine, it'll predict the next word for you. The same happens on your phone if you use something like SwiftKey or Swype: you type a few words and it predicts what the next one is, to help you go more quickly.
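A minimal sketch of the word-sequence idea Blaise describes: count which word follows which in example text, then predict the likeliest next word. The training text below is made up; real systems learn from enormous corpora.

```python
from collections import Counter, defaultdict

corpus = "my broadband does not work my broadband is slow my phone is broken".split()

# Count, for each word, what tends to follow it (a bigram model).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(prev_word):
    counts = bigrams.get(prev_word)
    return counts.most_common(1)[0][0] if counts else None

print(predict("my"))   # -> "broadband" (seen twice, vs "phone" once)
```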

Chris - I've seen a few accidents where that's concerned though.

Blaise - Yeah, exactly the same kind of accident can happen - the speech recogniser will incorporate that and, of course, it can go wrong, but for the most part it actually helps. One big thing I've been looking at is how you can incorporate a whole dialogue's context - the fact that you know what the person's been talking about - and actually use that, as well as just the most recent words.

Chris - Oh, that's clever. What you're saying is that by looking at the context - in other words, the conversation the person has already had - when it sees a new word, you're narrowing down your search for what the possible words might be. That sounds a bit spooky though. It sounds a bit like the computer's eavesdropping on what you're saying.

Blaise - Well, I think it's something that we all do. So, when we're talking to someone, we know what we're talking about and what the subject is, and so if there are two possible interpretations then we'll choose the one which makes the most sense in the context.

Chris - Has anyone got any questions for Blaise? We need some questions now on anything to do with electronic technology, speech recognition, your funniest word prediction moment. Anyone got any howlers that they've seen when they've texted something to somebody and something they didn't expect to come up? Yes, who's this gentleman here?

Rhys - Hello. I'm Rhys Edmunds from Comberton. What is the highest wavelength and the lowest wavelength the computer can transmit?

Chris - Can you just clarify what you mean by transmit? Do you mean as in sounds that the computer can make?

Blaise - To be honest, I'm not really sure what the answer to that question is. I believe it's several kilohertz. So, that's the frequency rather than the wavelength, but there is a range. Actually, the computer can both record and transmit higher and lower frequencies than our ear can hear.

Chris - Yeah, I mean, your ear is sensitive down to about 50 hertz, which is 50 cycles a second - that's roughly the lowest sound you can hear, though some people get down to 20. And it goes as high as 20,000 hertz when you're very young, but most people in this room who are a bit older will stop hearing sounds above 15,000 hertz. In fact, people know that young people can hear some of these sounds better than older people, and they're using them as a form of repellent for young people. There is in fact a shop - is it in Wales, Ginny? Do you remember this? They had a problem with youths loitering outside, so they installed a buzzing system that bleeps at about 18,000 to 20,000 hertz. The youths can hear it and it's enormously annoying, but the adults don't get deterred from going in and buying their paper because they can't hear it.

Ginny - But then the teenagers got their own back because they started using it as their ringtone because when their phone went off in class, the teacher couldn't hear it going off, but they knew it was. So, they got their own back.
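The 'mosquito' tone Chris describes is easy to reproduce: a pure sine wave around 18 kHz, which most teenagers can hear but most adults cannot. A quick sketch (played loudly this is genuinely unpleasant, so use with care):

```python
import numpy as np
from scipy.io import wavfile

sr = 44100                       # sample rate must exceed twice the tone frequency
t = np.linspace(0, 2.0, 2 * sr, endpoint=False)   # two seconds of audio
tone = 0.3 * np.sin(2 * np.pi * 18000 * t)        # 18 kHz sine wave
wavfile.write("mosquito_tone.wav", sr, (tone * 32767).astype(np.int16))
```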

Chris - How is that for ingenuity? Next question. Who else has got a question?

Mark - Hello. I'm Mark from Cambridge. I was just wondering, what's the limiting factor at the moment on the processing of speech? Is it our understanding of how speech works, or are we waiting for processing power to increase?

Blaise - I'd say it's more of an understanding thing. We already have quite a lot of processing power. Actually, I'd say there are two big things. One is collecting examples of people speaking - what we call data, a big data source. A big reason for big companies being much better at speech recognition is that they have a lot more of this data. The actual processing power is not such a big problem at the moment; it's very easy for us to get a large collection of servers together to do the processing. So, there is quite a lot more to do on the understanding of how to build better speech recognisers, and even that's been developing quite dramatically. In the last 10 years or so, these systems have become a lot more useable.

Chris - I've been quite impressed though, because I've rung up a number of companies that have automatic telephone answering systems, and whilst I find them infuriating, they do work really rather well. They say, "Tell me in a few words what your call is about" and you might say, "My broadband doesn't work", and then it says, "Are you calling me about your broadband?" and you say, "Yes." It's very tempting to be incredibly sarcastic, isn't it? Does it understand sarcasm? Does that wash? If you go, "No, I'm not", will it nonetheless still put you through to the broadband person?

Blaise - I think it probably depends on how the person's designed it, but in most cases, no. Actually, this is another thing we've been looking at: trying to get computers to learn, as well as the analysis of the words, the meanings of sentences. And a big thing for that is trying to work out whether you've done something right or wrong. What we're doing is getting the computer to try out different strategies of what it might do. So, in your case, it might try saying, "I think you want broadband" and then see if the person did, and then learn by itself what these things mean. For that, it's very important to see if you did the right thing or the wrong thing, because that can inform how you should change your strategy in the future. A pretty good signal, for example, is whether people swear at the system - generally, that means you've done something wrong. But checking if you've done something right is a lot more difficult.
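The trial-and-error learning Blaise outlines can be sketched as a simple bandit problem: the system tries different strategies, receives a success or failure signal, and shifts towards whatever worked. This toy is illustrative only, not Vocal IQ's actual method; the strategies and feedback model are invented.

```python
import random

strategies = ["confirm_first", "transfer_directly"]
value = {s: 0.0 for s in strategies}   # running estimate of each strategy's success rate
count = {s: 0 for s in strategies}

def user_feedback(strategy):
    # Simulated caller: confirming first succeeds more often (assumed).
    return random.random() < (0.8 if strategy == "confirm_first" else 0.5)

for _ in range(1000):
    # Mostly exploit the best-looking strategy, sometimes explore (epsilon-greedy).
    s = random.choice(strategies) if random.random() < 0.1 else max(value, key=value.get)
    reward = 1.0 if user_feedback(s) else 0.0
    count[s] += 1
    value[s] += (reward - value[s]) / count[s]   # incremental mean update

print(max(value, key=value.get))   # -> "confirm_first" (almost always)
```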

Chris - Do you remember that Microsoft paper clip that used to pop up? Do you remember that Microsoft paper clip? Who here found that bloody annoying? I saw a gag someone had done once and they were so fed up with it. They'd written this sort of thing saying, "I've decided to kill myself and whatever" and then the paper clip pops up and said, "This looks like you're writing a suicide note. Would you like some help with that?" Are you able to do emotion?

Blaise - It's not something that I've looked at. People have started looking at it generally, but it's actually very difficult - partly because it's very difficult even for humans to decide what the emotion is just from listening to someone. If you get two or three people and all of them listen to the same segment of audio, some will say that the person is very angry, others will say they have no emotion, and others will say that they're happy. The agreement amongst humans is very low, around 60%, so it's very difficult for computers to learn - and of course, their agreement is even lower.

Chris - Any other questions coming in so far? There's one gentleman at the back.

Carlo - Good evening. I'm Carlo from Cambridge. I would like to know whether there is any language which is easier for computers to understand, or are they more or less all the same?

Blaise - Yeah, that's an interesting question. There is a difference in the ease of understanding. In some respects, English is actually the easiest of all, and that's partly because we have a lot of what I'd call data - a lot of examples - because most of the internet is in English. There are other languages which are more difficult for various reasons. One is Chinese, and that's harder for two reasons. The first is that there are tonal variations in the language: there are words which to you or me might sound the same, but one might mean horse and the other mother. If you say the wrong one, you can get into deep trouble.

Chris - In some supermarkets, horse and beef ends up being confused, doesn't it?

Blaise - That's true, but I believe that's not because of a change in tone. So, that's one interesting reason for the difficulty. Another, in the case of Chinese, is that the writing doesn't really segment words in the same way that we do in English, so there's a different idea of what a word even is. There are other languages which are difficult for other reasons. One is Turkish, because of the way the words are structured. In English we have prefixes and suffixes, and that's quite easy to deal with, but in other languages you get what are called infixes - the words can change because of a change in the middle of the word. That makes things a lot more difficult for various technical reasons, because you get a huge explosion in the number of possible words.

Chris - Presumably, the system can learn though. Can it not teach itself and slowly get better?

Blaise - Yes. That's what we do in all of these cases - Chinese, Turkish, Arabic, etc. For all of them, the best systems now are ones that have been learned automatically. But it becomes more difficult when you have a very large explosion in the number of words, or when you start having to add extra things that affect the meaning.

Chris - It's obvious that if you want to make someone understand, you just speak loudly and clearly in English, don't you? Any other questions?

Fergal - Hi, this is Fergal from Cambridge. I'm wondering how the speech recognition works in general. Is it more by pattern matching, or does it break all sounds down into basic elements - wavelength, frequency, amplitude - and what similar parameters make for different sounds and different accents and tones?

Blaise - What happens is that it takes a soundwave and breaks it up into a sequence of what we call features. These are a bit like what you called amplitude and so on - actually, they're slightly different, but effectively the idea is the same. Then it tries to find a sequence of sounds which we call phones - things like 'ahh', 'uhh', 'buhh', etc. There's a collection of these which pretty much defines all the sounds we can produce as humans, a subset of which is used in English. It tries to find the most likely sequence of these phones given the example of speech it's been given. The way it decides whether something is likely is that it looks at the distribution, or the patterns, of these features that you would usually get for that sound - things like the amplitude. The specific features which are typically used are a little bit more complicated. They're called mel-frequency cepstral coefficients, which is...

Chris - Say that again.

Blaise - Mel-frequency cepstral coefficients. Also perceptual linear prediction coefficients.

Chris - How does that go down at parties?

Blaise - It usually kills off all conversation to be honest which is why I usually try to leave it for as long as possible before bringing that in.
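The features Blaise names, mel-frequency cepstral coefficients, are a standard first step in speech recognition and straightforward to compute with an off-the-shelf library such as librosa. A minimal sketch, where the filename is a placeholder:

```python
import librosa

# Load a short recording (any WAV file of speech; this name is a placeholder).
y, sr = librosa.load("speech_sample.wav", sr=16000)

# One small vector of 13 coefficients per short slice (~25 ms) of audio.
mfccs = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
print(mfccs.shape)   # (13, number_of_frames)
```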

Chris - Any other questions?

Jenny - Hi, Jenny from Cambridge. I was just wondering if the video phones were developed more, would that help, especially with emotion? So, would you be able to match image with sound and you'd be able to tell by someone's face, their emotion more?

Blaise - Yes, video gives a lot of help for emotion detection. It actually helps with a few other things as well. You wouldn't think it's a problem, but one of the most difficult things is for a computer to decide when it should talk and when it thinks you're talking. If there's noise in the environment, it's very difficult to know when a person has stopped talking or is carrying on, and people pause, so it's never quite clear whether the computer should start talking now or not. Even people struggle with this - people will interject and fight about who's talking. If you have video, it's much easier because, as well as listening to the audio, you can look at someone's lips and face and see whether they're still planning to speak. So, video helps with more than just emotion; it also helps with what we call turn-taking.

Chris - Can you also use facial expression to work out what someone might be trying to say because there's this famous McGurk effect, isn't there Ginny, where actually, if you show someone mouthing a word one way, but play them a different sound, the person who's listening can get confused and hear a very different thing?

Ginny - Yeah, exactly. We very much use the shapes being made by the face to help us work out what someone is saying, particularly in noisy environments. So, if you see someone going 'bah' and making the kind of face they make when they make a 'bah' sound, and you hear something that could be a 'bah' or could be a 'mah', you'd assume it's a 'bah' because of what you can see them doing. We definitely use that, so I would've thought it would be helpful for the computer as well, at least for some of those sounds.

Blaise - Probably, yeah.


21:53 - Robots in the lab

How do robots help scientists with their work every day in the lab?

Robots in the lab
with Neil Bargh, TAP Biosystems

Neil Bargh, who works in research and development for TAP Biosystems, explains to the audience at the Cambridge Science Centre how he develops robots for use in the lab, and how difficult it is for robots to model the human hand...

Chris - Neil, you make robots to work in the laboratory. So, tell us about them. What do you actually do?

Neil - Our systems do all kinds of different processes in laboratories. We've been making systems for about 20 years, processing bottles or flasks like these. These are called T-flasks. They hold about 600 ml of liquid - about a pint.

Chris - Do you put beer in them then? Is that what you're doing?

Neil - We've had ideas of such things.

Chris - You do fermentation.

Neil - Yes, we do. These are used to grow cells in - all kinds of cells, animal cells, maybe insect cells - and the reason that's done is to help with pharmaceutical research. All around the world, there are thousands of biologists, and part of their job is to look after their cells. So, once every few days, they'll need to get a collection of these flasks, take them out of an incubator, unscrew the caps and empty out the liquid - basically, give their cells some more food. The cells grow in the flask and eventually run out of space, so then you have to take the cells out of the flask and put them into a new flask, to give them more space to grow and keep them happy. Now, the cells grow at a certain rate, and sometimes they need to be fed and watered at the weekend, so people have to work the weekend to keep looking after their cells.

Chris - That's what graduate students are for, isn't it?

Neil - It is exactly, but once people mature and get older, they don't like spending their weekends going into the lab to do these jobs, and that's where some of our systems come in. Our systems can automate the whole process, so that scientists can use their time more valuably rather than doing the jobs of unscrewing caps and pipetting liquids.

Chris - Isn't it true though that the volumes of liquid we're testing in laboratories, for many of the modern tests like DNA tests, are really, really small, and we're doing many hundreds of test tubes all in a row at once, and humans are not terribly good at remembering where they got to? You might make a mistake on the 89th tube and think, "Was it the 88th or 89th? I'd better start again." Isn't a robot better than that?

Neil - Very much so. Many years ago, people used to do all of their work in test tubes. As time progressed, we ended up with things like this: a plate with 1,536 wells in it. Each well is about a millimetre square and about 5 millimetres deep. To access that, you need some small pipette tips - in this box, there are 384 of them. So, we make systems that can pipette 384 samples at a time from plates like this into other plates like that, for performing experiments en masse.
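As a rough illustration of the bookkeeping such a system does, a 1,536-well plate is a 32 by 48 grid, and every well needs an unambiguous address. A sketch of one common labelling scheme (conventions vary between vendors):

```python
import string

ROWS, COLS = 32, 48   # a 1,536-well plate

# Rows A..Z, then AA..AF, is one common convention for 32 rows (assumed here).
row_labels = list(string.ascii_uppercase) + ["A" + c for c in string.ascii_uppercase[:6]]

def well_name(index):
    # Number wells left-to-right, top-to-bottom, starting from 0.
    row, col = divmod(index, COLS)
    return f"{row_labels[row]}{col + 1}"

print(well_name(0))      # -> "A1"
print(well_name(1535))   # -> "AF48", the last of the 1,536 wells
```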

Chris - Any questions so far about how we can build robots to speed up research in the laboratory?

Ginny - Well, I've got one on email from Steve Lamble, who says, "Why is this technology not used in Amazon warehouses?" This was something that was big in the news recently - we're still asking humans to run around with trolleys to pick things up. It sounds like the kind of thing, on a bigger scale, that's similar to what you're talking about.

Neil - Yes, and actually, I'm surprised that they wouldn't use automated systems in the Amazon warehouse, because many warehousing systems are automated. We made a system for UK Biobank, which stores biological samples; they're stored at minus 80 degrees, and there are potentially millions of samples. The robot is actually working at about minus 20 degrees, but that's an example of an automated warehousing system.

Chris - So, unlike the employees at Amazon who were complaining last year about conditions, robots don't complain. I suppose that's one big bonus, isn't it?

Neil - Indeed. They can work 24 hours a day, 7 days a week without stopping. They need some maintenance, but that's one of the benefits of automation: they can work round the clock.

Dave - Is part of the reason that's going to be very difficult because there are lots of different objects in the warehouse, and you're saying that robots are not that good at dealing with new, unexpected situations?

Neil - It's certainly a significant factor. Whenever we are looking at a new application, we try to standardise the things that need to be handled, and handle as few different components as possible - we even try to design the things we need to handle to make them suitable for robotic handling. So, I think you're right: the sizes and shapes of the packages would be a big factor, and if they were to try to automate it, they'd probably need to standardise on a small set of different-sized boxes.

Chris - So, do people come to you and say, "This is our laboratory. We want you to design us a robot that will do things in our laboratory" or do you make systems and then people buy them and design their lab around your system?

Neil - It's a bit of both. Quite often, we're looking to develop new systems, so we talk to people and say, "What is it that you do in your working life, and where are the real problems? Where do you spend all of your time?" Having those discussions, we identify that people spend a lot of their time doing fairly mundane tasks, and we think: well, if we can help with that task, is it worthwhile? In some of the applications, like the automation of cell culture, there's a variety of benefits - it's not just a labour-saving action. When humans perform an operation, there's always some variability from one person to the next, and what happens on Friday afternoon is often different to what happens on Monday morning, whereas a robot will perform the same task pretty much identically every time. So you get more consistency in how the cells are actually growing in the flask. Robots also don't introduce contamination. That's quite a problem when you're trying to do cell culture - making sure everything stays sterile. There are bacteria everywhere and it's very easy to contaminate samples, but robots inherently don't.

Chris - Can't your robots catch computer viruses?

Neil - We try to avoid that, yeah.

Chris - What sort of software do they run on then? How do you programme them?

Neil - There's a variety of different software packages at different levels. Usually, there's a system PC running Windows, and that then talks to, say, the arm, which has its own controller in it coordinating the movement of each of the joints to move the pieces around.

Chris - What about actually making those movements though because I have a hand and the most powerful feature of my hand is the fact that I can make my thumb meet the ends of each of my digits. Is that something that's very easy to replicate?

Neil - Not at all. I think generally, as humans, we underestimate how amazingly sophisticated our bodies are. When you grab something with your hand, you've got an amazing sense of touch in all of your fingers, and that feedback, together with your eyes, means that you can look at an object and pick it up without even thinking about it. Whereas a robotic system that doesn't have a lot of sensory feedback might not know, if it picked up a bottle, that the cap was missing; it would try to process it anyway and things would go wrong. So, we do have to have a certain number of sensors on the system. But relatively speaking, most robot grippers are pretty clumsy compared to our hands.

Chris - Any questions first before we - we've just got one question here.

Jack - My name is Jack and I'm from Comberton. How much would it cost to like make and design one of these robots?

Neil - It depends on the sophistication of the system, but usually, it's millions of pounds of development effort.

Chris - How much pocket money do you get? It depends on how many chores he does. You need a robot for that.

Bea - Hi. My name is Bea and I'm from Cambridge. I was wondering, to what level can these systems be used for pattern recognition within the lab environment? Currently, as you explained, they're doing mechanical tasks, but when it gets to, say, analysing 900 different samples, they might provide a speed of processing that is much quicker and help narrow down the number of samples that people working in the lab would have to take further.

Neil - Yes, I think that's more related to a vision system and the analysis of, say, a photograph. As an example, it might be trying to distinguish cancerous cells from non-cancerous cells, and that might be a very good application for helping that kind of activity in a lab. There are quite a lot of robotic systems that do have integrated vision systems on them. An example would be in PCB construction - the printed circuit boards inside phones. These are circuit boards with tiny components on them, and in their assembly, the components are presented to the robot, which picks them up with a tiny suction cup and moves them to a camera. The camera looks at each component and decides exactly where it is and what its orientation is, so the robot can accurately place it onto the PCB.

Ginny - I've got a question here from Dominic in Cambridge. He says, "Why have robots been used in factory production lines for so many years, but seem to be very slow to appear in people's homes? Where's our robot to do the chores?"

Neil - Yes, if only we had those robots in our homes. I think it's a lot to do with the economics of it. In industrial processes, there's the same task to be done over and over again - if you think about car production lines, where you're making one car a minute, having a robot that can do the same spot welds all the time is a very efficient way of applying a robot to a process, because it's the same process. Whereas in a house, the household chores - washing up, doing the hoovering, dusting the cobwebs - are very varied, and everybody's houses are very different. In terms of robots, there are vacuums that will sort of bump around hoovering the carpet, but what do they do on the stairs? They get stuck.

Chris - Stairs defeated the Daleks too, didn't they? I tell you what we do want: an ironing robot. That's what we want, a robot to do the ironing.

Neil - No, but then there are other solutions. It would be much better to have clothes that don't need ironing. That would be, say, an engineer's approach to solving the problem.

Chris - Have you and Blaise got together to see if his speech recognition could be plumbed in to your robots so that you could actually end up with a robot that would do as you told it?

Neil - No, we haven't but it would be very useful because sometimes in developing systems, you can see something bad is going to occur. If only you could shout, "Stop now!"

Chris - Can you give one such example?

Neil - Well, these flasks are made of quite brittle material, and sometimes in developing the systems, you leave a flask in a place where it shouldn't be. The system knows where everything is, but if you open an incubator door and decide to start moving flasks - you take a flask out, have a look at it, think "that's very interesting" and put it back - if the system doesn't know that, it can quite easily smash one of these flasks completely inside another. So you'd basically end up with one flask right inside another.

Chris - Whoops! Especially if there's Ebola in there or something. That could be nasty, couldn't it?


36:55 - Building a Mars Rover

Paul Meacham explains the challenges he faces while working on building the ExoMars Rover...

Building a Mars Rover
with Paul Meacham, Astrium

Paul Meacham, a systems engineer working on the ExoMars Rover, explains the challenges he faces when designing a robot which has to look for life on the red planet in 2018...

Chris - Paul, you have the amazing party conversation starter of being able to say that you make rovers for faraway planets.

Paul - Yes, although it tends to be the social equivalent of Marmite: either it starts conversation or kills it dead. So, we have a 6-wheeled vehicle, which we refer to as the Mars rover, that's going to Mars in 2018. The goal of the mission is to look for signs of life outside our planet. Whereas previous rovers - and they have primarily been American; this is the first European one - have looked for the conditions for life and things like that, we are looking for life directly.

Chris - And what are the hallmarks of life that you'll be looking for?

Paul - Well, it's a little bit difficult to say, because when you try to characterise life, you find it's actually very, very hard. Essentially, we have an organic molecule detector that looks for very simple molecular structures, and we also have a drill on the front of the rover that allows us to take samples from 2 metres below the surface.

Chris - Like a Black and Decker.

Paul - It's actually more akin to an oil rig in the way it works. It has a single drill piece and extension rods that rotate into place to make a drill that is 2 metres long. The drill piece has a shutter in the bottom of it that can push the sample in and bring it back up to be analysed by the organic molecule analyser.

Chris - How big is this rover?

Paul - It's a medium-sized Mars rover, if there's such a thing. It's about 2 metres tall from the base of the wheels to the top of the mast, and about 1.6 metres long. So, it's a reasonably sizeable beast, and that means it can travel over most of the terrain we expect it to encounter.

Chris - There seems to be a vogue for describing your rover in relation to a vehicle here on Earth. Curiosity was famously dubbed the size of a Mini Cooper. So, how big is yours then?

Paul - Well, we're not quite that big. I guess we're sort of the size of...

Chris - Reliant Robin?

Paul - I was thinking more like one of those big lawn mowers that you sit on and drive around. It's about that sort of size, I think.

Chris - So, how far down the development process are you with it?

Paul - Well, we've got to the point where we have several development models to test out the bits of the rover's technology that are less mature, simply because they haven't been done before - in particular, the autonomy the rover has: its ability to drive itself across the surface of Mars. We have several prototype rovers that allow us to demonstrate that and practise it, and see what works and what doesn't. But we're actually some way from building the flight rover.

Chris - And you can recreate Mars to test it on, can you?

Paul - We can, yes. We have a big Mars yard - a giant sandpit, if you like - in Stevenage, and the rover essentially practises in there. It drives over rocks; we see how it handles slopes. Our prototypes are deliberately built so that they have the same weight on Earth as the real one will have on Mars, where the gravity is much lower, so they behave in the same way the real one will when it gets there. That means we can write all our clever software to control it, knowing it'll work on the flight rover when we arrive.

Chris - Is the temperature on Mars at night time not close to minus a hundred though?

Paul - It can be slightly lower than that, yes. Daytime temperatures are around zero to 10 degrees, which electronics typically like, but night-time temperatures, as you say, can drop to minus 130, and they drop off very quickly because the Martian atmosphere is very thin and just doesn't have the same heat-retention capability that the Earth's does.

Chris - But is it not quite good to have cold temperatures for electronics, because doesn't the electrical resistance drop when it's nice and cold?

Paul - It can do, but the real problem is that when you get below about minus 100, you start to see your circuit boards freezing: solder starts to break, and even the circuit board itself can break in two. That means it's dead, and you've got no way of repairing it. So, we have to avoid low temperatures as much as possible.

Ginny - So, we have a little demo now to show you exactly what happens to various bits of electronics at very low temperatures.

Dave - So, what I have here is an exceedingly jury-rigged circuit: some batteries with a little light and a very long coil of wire. In this thermos flask, I have some liquid nitrogen, which is sitting a little bit colder than Mars at about minus 196 degrees centigrade.

Ginny - So, you can see we've just poured some out into a cup, and it's bubbling - it's actually boiling, in the same way that your kettle gets lots of bubbles in it when it comes up to 100 degrees. Liquid nitrogen boils at room temperature, and all that vapour coming off is from it boiling. So, we've got a nice little circuit here - it's a little bit makeshift. We've got some wire and a bright red LED, which everyone can see is glowing beautifully. So, what are you going to do with that LED?

Dave - So, the first thing I'm going to do is cool down the wire.

Ginny - So, you're popping the wire into the thermos flask full of liquid nitrogen. We've got beautiful vapour going everywhere and it's making quite a noise, and now it seems to have quietened down. So, what does that mean?

Dave - So now, the wire is sitting at about minus 200 degrees centigrade, and the wire is actually perfectly fine at this sort of temperature. If anything, the LED will have got a little bit brighter, because the resistance of copper drops an awful lot when you get down to this sort of temperature - I think even by a factor of 10. But if, instead of that, we cool down the LED, which is a piece of electronics, it doesn't work quite so well.

Ginny - That looks so pretty, glowing in the cup of liquid nitrogen... and it's gone off. It's stopped working. Have you just broken it?

Dave - If I take it out again and let it warm up slowly, it comes back on.

Ginny - Yeah, so it's back. It's not quite as bright as it was at the beginning, but it's getting there... actually, I'd say it's just as bright now. So, it wasn't that you'd broken it - it just can't work while it's that cold. Why is that?

Dave - LEDs, and most of the clever bits of electronics, are made out of materials called semiconductors. These can conduct or insulate, and they can have all sorts of interesting properties. But in order to work, they need a bit of heat to give the electrons a bit of a kick and let them move around. As you cool them down, the electrons get locked up and can't flow as an electric current - they basically just stop working. So, if you get a piece of electronics cold enough, it just doesn't work.
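Dave's explanation can be put in rough numbers: the fraction of electrons a semiconductor can free up scales roughly as exp(-Eg/2kT), so cooling from room temperature to liquid-nitrogen temperature collapses it. A back-of-envelope sketch, with an assumed band gap for a red LED material; this ignores the details of how a driven LED works, so treat it as an order-of-magnitude illustration only:

```python
import numpy as np

k = 8.617e-5   # Boltzmann constant, eV per kelvin
Eg = 1.8       # rough band gap of a red LED material, eV (assumed)

def relative_carriers(T):
    # Boltzmann-style estimate of available charge carriers at temperature T.
    return np.exp(-Eg / (2 * k * T))

# Liquid nitrogen (77 K) vs room temperature (293 K):
print(relative_carriers(77) / relative_carriers(293))   # ~4e-44: essentially none left
```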

Ginny - So, something to avoid on Mars I guess.

Paul - Definitely, yes. So, we have to manufacture the environment that the electronics sit in to be much closer to where electronics like to operate - somewhere between minus 40 and 10 degrees centigrade. The way we do that is to put all the electronics in a central core structure, which we refer to as the bathtub, and the bathtub essentially has the space equivalent of double glazing: an inner skin and then a cold gas trapped between that layer and the outer skin. Just like double glazing, it stops the heat from escaping and creates a thermal barrier, so you can control the temperature inside the bathtub to whatever you like - typically somewhere between minus 40 and 10 degrees.

Chris - Where does the power come from?

Paul - Well, all our power comes from the solar panels that sit on top of the bathtub and essentially seal it. So, that's our primary power source while we're on Mars. We have a battery as well that charges up in the day to keep the rover alive at night. But in essence, yes, we are completely environmentally friendly and solar powered.

Chris - These solar panels, how long will they keep the rover running for?

Paul - It's an interesting question, because one of the big issues with using solar panels on Mars is that they eventually get covered in dust, and it's very, very difficult to get that dust off once it's landed on the panel. Essentially, your efficiency - the amount of power you're generating from that solar panel - drops off over the mission.

Chris - You need a robot to clean them.

Paul - Almost, but that can get a little bit difficult, because you end up scratching the glass the solar panel is covered in, and then you have permanent damage and will never generate enough power again. So, it is a problem, and we oversize the solar panel to compensate for some of it. Fortunately, the dust storms, which can envelop the whole planet, are seasonal on Mars - they tend to happen from the autumn equinox around to the spring equinox. So, it makes sense that our mission lands at the spring equinox, just after the dust storm season has finished, and our nominal mission should then be completed by the time it starts again.

Chris - Questions from the audience. Who would like to ask about building a robot to explore another planet?

Mark - Hi, it's Mark from Cambridge. I was just wondering, given that what can go wrong will go wrong, and that Mars is a long way away, do you tend to over-engineer the systems to try to ensure they don't break, or does the rover have some ability to repair itself at all?

Paul - Certainly, we do over-engineer it a little bit to guarantee that it's going to work over its nominal mission. But in fact, the main way we avoid problems is to carry two of everything. We have a prime and a redundant equivalent of every single unit on the rover: two computers, two power distribution units, two sets of sensors, and so on. Essentially, if one breaks, you switch over to the B side, if you like, and use that instead - they both work in the same way.

Chris - Have you got any spare tyres?

Paul - Actually, we don't use tyres, because they're made of rubber and we can't take organics to Mars if we're looking for life. So, we make the wheels of the rover out of metal, and they are sort of like a spring in a wheel: they compress slightly to allow us to climb over rocks and grip properly.

Chris - How do you actually control and steer the rover around on Mars?

Paul - Well, Mars is so far away that, in the worst case, there's a 20-minute time delay, so it's not practical to drive the rover by remote control. Even though it's monitored back on Earth, we actually want the rover to make as many of the decisions as it possibly can. In fact, our rover is capable of accepting a target which can be several hundred metres away and is simply an X, Y coordinate - then the rover does everything else itself. It will use its cameras to image the terrain in front of it, identify where the rocks and the slopes are, figure out whether areas of that terrain are safe or not, plan a path through it, and then drive that path all by itself. In fact, we'll only see the rover once or twice a day.
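The navigation step Paul describes - build a safety map from camera images, then plan a route to an X, Y target - can be sketched with a textbook grid planner. Real rover planners are far richer; this toy uses A* search on an invented map where 0 is safe terrain and 1 is not:

```python
from heapq import heappush, heappop

grid = [[0, 0, 0, 1, 0],
        [1, 1, 0, 1, 0],
        [0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0]]

def plan(start, goal):
    frontier, seen = [(0, start, [start])], {start}
    while frontier:
        _, (r, c), path = heappop(frontier)
        if (r, c) == goal:
            return path
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                # Priority = steps taken so far + straight-line estimate to goal.
                cost = len(path) + abs(goal[0] - nr) + abs(goal[1] - nc)
                heappush(frontier, (cost, (nr, nc), path + [(nr, nc)]))
    return None   # no safe route exists

print(plan((0, 0), (3, 4)))   # the grid cells the rover would drive through
```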

Chris - I'm still disappointed that you can't sit in your lab with like a radio control device and sort of think, I'm steering this thing. How far away is it to Mars? How far is your message having to go to get to Mars?

Paul - It varies. The closest approach is 36 million miles; the furthest is 250 million miles. That's where the 20-minute time delay comes from, even travelling at the speed of light.

Chris - So, 20 minutes for my message saying what I want it to do to get to the rover.

Paul - Yes, that's right. So, if the rover was driving forward and you saw an obstacle you wanted it to avoid, you'd press stop and then 20 minutes later, it would have hit whatever the obstacle was you were trying to avoid because the signal takes so long to get there.
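Paul's figures are easy to check: signal delay is just distance divided by the speed of light.

```python
C_MILES_PER_SEC = 186_282          # speed of light, miles per second

for miles in (36e6, 250e6):        # closest and furthest Earth-Mars distances
    print(f"{miles / 1e6:.0f} million miles -> {miles / C_MILES_PER_SEC / 60:.1f} minutes")

# 36 million miles  -> ~3.2 minutes
# 250 million miles -> ~22.4 minutes (the "20-minute" worst case)
```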

Chris - Any other questions so far? One just at the back, this lady...

Sophia - Hello. I'm Sophia from Cambridge. I would like to know if this rover stays on Mars or do you bring it back?

Paul - Sadly, it's staying on Mars forever, and the reason is quite simple. When you want to take lots of scientific instruments, you want as big a rover as you can possibly send, and if you want to bring something back, you have to take the fuel with you to launch it back off the surface of Mars and home to Earth. Of course, that severely limits the size of the rover, the number of scientific instruments, and so on. So, it's a simple choice essentially: do you want to do lots of science, or do you want to get a sample back? To this point, all the rovers have been on a one-way trip, but the mission that follows ExoMars is called Mars Sample Return, and it's a much simpler spacecraft designed to land, take some samples, and then get back up and return to Earth. The groundwork will have been done by the rovers, because by travelling large distances they figure out where is and isn't interesting to take samples from.

Chris - What about making sure we don't bring back something horrible to Earth from Mars? What steps are in place to make sure we don't ruin our planet?

Paul - It's a big issue, and it applies in both directions actually. We have something called planetary protection, which means we can't contaminate Mars; but yes, if we were going to bring a sample back, it couldn't contaminate the Earth. So, it's likely that the sample container would be launched and then collected in orbit by another spacecraft that has not been down to the surface of Mars. That would then return it to Earth, sealed in an entry vehicle to get it down to the surface. So, no part of a spacecraft that has been exposed to the Mars environment is then exposed to Earth.

Chris - So that way, there's no way it can deposit anything into Earth's atmosphere.

Paul - That's right and you'd have about 10 minutes or so on landing to go and find it, collect it, and get it back into a clean environment.

Chris - Before someone else nicks it or...

Paul - Well hopefully not - because of the risk of contamination still.

Chris - Any other questions? Yes, Neil. Go ahead.

Neil - What is the lifetime of a rover on Mars?

Paul - Well, the nominal mission, if I can call it that, is 218 days - or sols, as they are on Mars - and that's reasonably typical: the Spirit and Opportunity rovers had a nominal lifetime of 180 sols, and Curiosity's is about 2 years. That's the period in which it has to achieve all its basic science goals. But of course, the mission can and will go on beyond that - partly as a result of the over-engineering, but partly because we'll just keep running it until something breaks.

Ginny - Steve on Facebook wants to know, "We've got driver-less trains and you were talking about a sort of driver-less rover, so when are we going to have driver-less cars, and would we really trust them? Can we be sure that they'd actually be safe?"

Paul - It's an interesting question. We are starting to make steps towards that: Google have a car that can do some degree of driving by itself. It's probably only ever going to work well if all the cars are driverless, because then there's less unpredictability. Certain features are appearing in cars these days, like lane control - if you start to drift outside your lane, the car will vibrate to let you know it's happening. So, some features will come into our cars, but I think we're some way from all of us driving cars that are autonomous.

Chris - Any other questions from you guys out there, one at the back just over here?

Rhys - So, this is Rhys Edmunds from Comberton. How long roughly will a rover be out on the surface of Mars per day?

Paul - Well, certainly at night we don't go anywhere - the rover is parked up because of the temperature drop, just to keep it alive. In the day, it rather depends on whether we're at a science site or not: we might be drilling holes or doing analysis, that sort of thing. But if it's a driving day, we expect to travel about 70 metres, all done autonomously. Over the 218 days, we'll cover about 4 km or so - obviously we're not driving every day, but that's typically what we have to design for.

Chris - When will this actually blast off heading for Mars?

Paul - We're due for launch in 2018, and the launch date is quite specific because you can't just go to Mars whenever you like. We have to wait for the planets to be in the correct alignment relative to each other, such that by the time you blast off and travel out to Mars's orbit, Mars is there. That only happens every 2-and-a-bit years, which is why the first part of the mission goes in 2016 and the rover goes in 2018 - it has to wait for the next opportunity.
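The '2-and-a-bit years' comes from the synodic period - the time for Earth and Mars to return to the same relative alignment - which is quick to calculate:

```python
# Orbital periods in days.
T_EARTH, T_MARS = 365.25, 687.0

# Synodic period: how often the two planets line up the same way again.
synodic = 1 / (1 / T_EARTH - 1 / T_MARS)
print(f"{synodic:.0f} days = {synodic / 365.25:.2f} years")   # ~780 days, ~2.1 years
```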

Chris - Curiosity came down in this incredible way: they actually had a platform with thrusters that stopped the rover above the surface and then winched it down onto the surface of Mars. I mean, it was incredibly elegant. How are you going to land? Not like a Beagle, hopefully.

Paul - Hopefully not. But we will also be using powered descent, because that's really the standard for rovers of this size - partly because you want a nice gentle landing, but partly because you want to target the rover very, very carefully. We now have a reasonable idea of where we want to take samples, so the rover has to be able to be directed to that place and can't just bounce around for miles. We essentially have a landing platform that has rocket motors in it, but the difference is, we're sat on top of it, not being winched down from it. We land the whole thing, because the difference between us and the Curiosity mission is that we have instruments on the actual landing platform as well, so we want to land them both in the same place to get consistent data.

Kate - I'm Kate from Cambridge. I've got a question. How would the Mars rover have done on Robot Wars and does the panel think that more kids would get into robotics if the BBC brought it back? How else should kids get into robotics?

Paul - I don't think our rover would fare very well, because it's very, very slow - there are rather different constraints when you're 250 million miles away. So actually, we would be severely outclassed in the Robot Wars arena. But yes, you're absolutely right: that sort of thing is a great inspiration for young people to go and explore engineering. Something like a rover has so many different aspects of engineering that it's actually quite a good way to explore lots of different areas.

Ginny - Neil, do you think one of your arms would do quite well? Would it be able to sort of punch people?

Chris - In Robot Wars?

Ginny - In Robot Wars, yeah.

Neil - Quite possibly. I think it would do quite well. I always...

Chris - Pipette the opposition to death...

Neil - Yeah. When I was younger, I always fancied entering the competition. Maybe I'm not too old.

Ginny - Now, I've got a more general question for all of you, I guess. Maybeline on Twitter wants to know: could we power robots on something other than electricity? Could we use something like food?

Paul - Well, I can start by saying something which is quite important for planetary exploration in the future, particularly when we go beyond the orbit of Mars, where solar power starts to become unviable. It may even be applicable for human missions, particularly with regard to getting something back without having to take lots of fuel with you. One of the ideas is that you would actually mine methane and use that as the fuel to power your rocket, or whatever, to get back to Earth.

Neil - It's really a fracking mission.

Chris - Is that what the drill's for?

Paul - Not in this case, but...

Chris - Any other questions from everyone at home?

Ginny - So, Joe asks: "There's been this movie out recently, 'Her'. I haven't seen it, but I think it's about someone falling in love with one of these voice recognition systems." We were wondering, how important is it to have an emotional connection to a robot? And, linked to that, what do you need for that emotional connection - do we need something that looks human, that sounds human?

Blaise - Yeah, I think it's actually very important. People like using robots that have more emotion. In some ways, that's possibly why people have been more interested in Siri, which is from Apple, than in Google's product, which is called Google Now. Siri has this kind of personality - you can ask if Siri will marry you, for example, and...

Chris - What will she say?

Neil - I think usually, she says no.

Chris - She's fussy then.

Neil - She's very fussy, and I think has refused most of the population. I think she likes her own company. So, I think the things that we get affected by are things like the voice - it's quite easy for us to hear emotion in a voice, and that can have a very big impact. The more of that you have, the more affinity you can attach to the robot. But I don't think it's necessary to have everything; you probably don't even need a face, or a picture of a face. You can start developing some emotion for your robot or device even if it doesn't have a picture.

Chris - Do you give your robots names, Neil?

Neil - Yes, we do. For the projects, they'll all have names and they've had various themes of names over the years.

Chris - In the diagnostic lab I work in, they've got names like Rob, Jane and Freddie, and stuff like that - I mean, Scooby Doo. Everyone thinks it's quite funny, but it does actually kind of endear the staff to the machine they're using to do all these tests.

Neil - Yup and we've wasted many hours debating what we should call the systems in our robots.

Chris - And?

Neil - The debates continue.

Chris - Have you got a nickname for your robot, Paul?

Paul - Yeah, all the prototypes have names. So, we've got Brigitte, Bradley, Bruno, and Brian, are our four.

Chris - So, you favour B's then.

Paul - We do, yes. It started with Brigitte, because our prototypes are referred to as breadboards - that's the correct engineering term for them - so they often get shortened to 'BB' in the documentation. One of the members of the project team was a fan of Brigitte Bardot, and so it became known as Brigitte, and we sort of followed on with the Bs from there.

Ginny - That kind of leads on to a question here from Jenny Lugo, who wants to know if there are any ethical issues around the use of robots. She quotes an article she read which had the line, "We had people interact with very cute baby robotic dinosaurs, and then at the end of the workshop, we asked them to torture and kill them. They were pretty distressed by this."

Chris - I'm not surprised.

Ginny - So, does naming your robots and sort of giving them a personality, then mean you're going to be sad when they break or are you going to miss your robot that doesn't come back from Mars?

Paul - Yeah, I guess so. There's a very sad cartoon that does the rounds on the internet of Spirit asking whether it can come home or not. But yeah, naming them gives them a bit of a personality - and they do have personalities - and it probably makes us care for them, I think.

Chris - The Chinese lander that was exploring the Moon over the Christmas period developed a problem, and it sent back a message saying, "My masters can't manage to shut me down in time for the cold weather that's coming, so I might not wake up again in the morning. Goodbye." It got virally tweeted around the world, because I think everyone felt very attached to this machine. Do people get attached to your lab machines?

Neil - I think 'attached' is a strong word, but yeah - if you've been working on a system, on a robot, for a year, and you know it's a prototype and at some point it's going to go in the skip, then... you've put so much effort into making something, and once it's served its purpose, to throw it away is sometimes tough.
