Can you trust a robot soldier?

27 June 2017

Interview with

Emeritus Professor Noel Sharkey, University of Sheffield

One particularly controversial application of robots is in the military. Terminators, transformers and the like might make for good sci-fi movies, but what about the reality? What kinds of robots exist that could fight a war? And can we trust them? Katie Haylor spoke with Sheffield University’s Noel Sharkey...

Noel - There are many kinds really. Anything from combat robots being used in the air to attack people. There are robots on the ground that will help retrieve soldiers from the battlefield. There are a lot of bomb disposal robots that go after improvised explosive devices under cars and that sort of thing. Then there are robots being used for simple things like cleaning the hulls of battleships.

Katie - Which ones do you think are particularly novel?

Noel - The most novel are also the most dangerous. Lethal autonomous weapons systems will fly out or drive out, find their own targets and kill them without human intervention.

Katie - Is this sci-fi, is this stuff stuck in a laboratory or are these robots actually being used?

Noel - They’re not being deployed yet, but they are certainly being tested, largely by the United States, by Russia, by the UK, by China, by Israel, and by some other countries that are more secretive about them, so it’s not long before they will be used. Some of them could be used now, but they aren’t being used because there’s a lot of debate as to whether it’s right to have machines that can make decisions, life and death decisions, without a human helping them.

Katie - Do we really know that a robot is going to do what you tell it to do, say in a battle situation?

Noel - The more autonomy you give a robot weapon - and we’re talking here about tanks, submarines, aircraft, gunboats, all those kinds of things, traditional military weapons, not terminators - the more they work on their own, the more likely they are to come upon unpredictable events. In a battlefield there must be a very large, if not infinite, number of unanticipated situations. You can’t have fully tested an autonomous robotic device against all of them, so it will be unpredictable and it will cause accidents.

Katie - Is there any case for trusting robots in military situations? Could a good relationship with a robot be good for soldier morale?

Noel - In very, very difficult, dangerous situations people tend to bond. And it turns out that people are bonding with remote controlled bomb disposal robots, and that’s very dangerous because they love a robot so much, they take it to the droid hospital, they won’t accept a new replacement, they want the same robot back. Soldiers then are prepared to give their life to save that robot because it’s their buddy. So you don’t want that kind of trust relationship, but I would say they trust them to find bombs for them.

There's a good case for trusting robots that will extract people from the battlefield. A battlefield extraction-assist robot called the Vecna BEAR was being developed; it’s like a little forklift truck that goes into the battlefield, picks up soldiers and brings them back. But they found that soldiers didn’t really trust it, because who wants to be picked up by a forklift truck - maybe it’s the enemy’s. But when you put a bear’s head on it, people feel more comfortable with it. They trust it more because it reminds them of their childhood. You’re lying there wounded in a sorry state, you regress a bit, and you look up and see this big bear coming to your rescue and you think... ah.

Katie - In a combat situation, aren’t robots more capable than humans? They’ve got better senses, they’re stronger - doesn’t this mean we should be trusting them more than a human soldier?

Noel - I think human perception is still well beyond anything you find in the military in any sense. We can deal with shadows in a way that machine vision can’t; machine vision is nowhere near the capability of humans. Robots could be very good, for instance, at detecting a sniper - they can use auditory information to quickly swing round, pinpoint the exact source of the shot and fire a weapon straight at it. But snipers are the trickiest people on the battlefield, and accuracy is not everything; what matters is being able to determine the legitimacy of your target. A human can exercise what you’d call human deliberation and look at a target in context. It’s also not that smart to shoot people when killing them could cause a riot and bring others after you.

There’s a really famous example from Iraq of soldiers who had their machine guns raised. They’d trapped some insurgents in an alleyway, but when they looked closely they saw that the insurgents were carrying a coffin. They took off their helmets and lowered their weapons out of respect. A robot would not be able to do that because it’s an unanticipated situation.

You can do all the testing in the world, but what happens when one autonomous robot weapon meets another autonomous robot weapon? Or worse, a swarm of robot weapons on the border meets another swarm of robot weapons? Well, what happens? I don’t know, and nor does anybody else on the planet, because if you take two algorithms whose contents you don’t know - and nobody but a fool would reveal the contents of their weapon’s algorithm - the outcome is totally unpredictable.

Katie - In the next five years, what is the most likely autonomous weapon to be used?

Noel - I would like to say none, because I’m part of a very large campaign of 60 NGOs - Human Rights Watch, the Nobel Laureates, Amnesty International, etc. - and we’re working very hard at the UN, and making headway, to have any weapon banned that does not allow effective human control or meaningful human control. There are some autonomous weapons around that get used all the time. You would really call them automatic weapons, and people can trust those more because what they do is shoot down missiles that are coming at you.

Katie - But we’re not talking realistically about enormous humanoid robots roaming around the battlefield are we?

Noel - No, very definitely not. It would be easy to shoot a couple of legs off a humanoid robot and it would stumble and fall over. We’re much more likely to see a massive tank. The Russians are developing the Armata T-14 tank and making it autonomous, and that’s a sort of supertank. Nobody’s talking about a terminator robot. The Russians have made a terminator-style robot that fires a machine gun, but they’re not being terribly serious.

