Can you trust a robot life-saver?

27 June 2017

Interview with Professor Kerstin Dautenhahn, University of Hertfordshire


How much can, or should, we trust robots in emergencies such as a fire? Georgia Mills quizzed Kerstin Dautenhahn, from the University of Hertfordshire, who's urging caution about trusting robots in general, following the publication of a study from Georgia Tech...

Kerstin - Researchers simulated an emergency scenario where people had first seen the robot in one of two conditions: one where it behaved properly and one where it made mistakes. The researchers then wanted to see whether the participants would still trust that robot in an emergency. So they created artificial smoke using theatrical smoke machines, the fire alarm rang, and the question was: would participants follow the robot's suggestions? What they found is that regardless of whether people had previously experienced the robot working properly or not, they all followed the robot's suggestion to go to what it proposed was an emergency exit. In fact it wasn't; the robot was misleading them. What this basically shows is that people are quite trusting when it comes to robots, even if they have previously seen them not working properly.

Georgia - So people do seem to trust robots, perhaps a little too much, in these emergency situations, and Kerstin has found a similar trend in her own research…

Kerstin - We've done a study where people had to carry out different types of tasks together with a robot, having previously experienced the robot either as faulty or as working correctly. The robot then made a number of unusual requests, for example asking people to pour orange juice into a plant, to put a pile of unopened letters into the bin, or to access a friend's laptop. In the case of throwing away unopened letters, 90% of the participants followed the advice. With regard to pouring orange juice into a plant, two thirds of participants followed the request. Regarding accessing a friend's laptop, 100% of the participants followed the suggestion. Even when the robot had performed incorrectly, participants did say it was less trustworthy, but the large majority still followed its suggestions. What this generally shows is that there is this notion of trust, and sometimes overtrust.

Georgia - So whether you’re being pointed to the fire exits or being asked to pour orange juice on a plant, people do tend to follow the instructions of even comically malfunctioning robots, which could be a bit of a problem…

Kerstin - Robots are machines and, like our laptops, our smartphones, and our washing machines, they will never work perfectly 100% of the time. Your laptop might crash; on your mobile phone a particular piece of software might suddenly stop running and you have to reboot. Regardless of how intelligent they might be, regardless of how interactive they might be, these are machines. So what we would ideally wish is that people are sensitive to these breakdowns and are able to recognise: maybe I should be careful, maybe I should not follow its suggestions, maybe I should switch it off and ask the manufacturer to repair it.

Georgia - Or perhaps try turning it off and on again. But why are we putting so much of our trust into robots?

Kerstin - The physical presence of the robot, the fact that it is there with you, sharing that particular space and time with you, makes it much more of an authority figure compared with, for example, a virtual agent on your computer screen or mobile phone. So it seems to be this physical embodiment that plays a role, but there are many other things that need to be investigated, for example, the long-term effect. Would you still trust a robot if it had shown itself to be faulty over an extended period of time? How would the nature of the mistakes that robots make influence trust?

Georgia - Our trust in robots could be linked to their shape. For example, would you trust a robot more if it looked like a person?

Kerstin - There is a lot of research into how people perceive different types of robots. One of the disadvantages of using a humanoid robot is that people automatically form very high expectations. They will assume that the system has human-like capabilities, human-like intelligence, and also a common-sense understanding of what happens in the world and what it is to be human, and that, of course, is not the case. We don't have any machines yet with human-level intelligence.
