Programming human morals into driverless cars

10 July 2017

Interview with

Peter Cowley, Angel Investor

This week, entrepreneur and angel investor Peter Cowley has been looking at a study considering how to programme morals into driverless cars. He explains to Chris Smith what the study sought to discover...

Peter - Yes, a very interesting study by a German academic, published in Frontiers in Behavioral Neuroscience. But, if you don't mind, I'm just going to back this car up slightly first and mention some facts and figures about deaths on the roads. Unfortunately, worldwide, there are about 1.5 million deaths on the roads each year. The UK is actually one of the top five safest countries, but we still have 1700 deaths a year, and in many countries somewhere between 70 and 90 per cent of accidents are due to human error. Studies have shown that, with self-driving vehicles, deaths should drop by at least 90 per cent - so down from 1700 to about 170 in the UK.

Chris - Why will those lives be saved?

Peter - Those lives will be saved because the accidents simply won't happen. Cars will not hit each other, and in most cases - not all - they will avoid hitting a human being out on a bicycle or whatever. So the pros for self-driving vehicles are: a huge reduction in CO2 emissions, mainly because cars will be in use 80 per cent of the time rather than 4 per cent; huge financial savings, because we won't all need to own cars; huge amounts of space released from car parks; a great improvement in mobility for the elderly, the blind, and the disabled - which will probably suit me, because I'm 61 now and this is probably at least 10 to 20 years away; and a vast improvement in productivity. The cons are mainly going to be job losses among car manufacturers, in repair shops, and in insurance.

Chris - Now let's talk about the question of morals, because the one thing you've mentioned is that, yes, they'll be a lot safer, but this still leaves a blame problem. At the moment, if I get in my car and have an accident, it's clear that if I caused it, I'm to blame; if the other driver caused it, they're to blame. If you've got a computer driving your car, we have a problem!

Peter - That's correct. At some point - there's no doubt whatsoever - there will still be deaths on the road, and some system somewhere, written by a human being, has got to make a decision about what to hit. Now, in principle, I think it could be argued that the occupant of the car - even though they're not driving, and maybe not to blame - is the person who should be the most likely to come to harm. Bear in mind, of course, that you're inside a metal box, which is very, very safe anyway.

Chris - What did they do in this present study?

Peter - In this study, a hundred or so people were asked to wear a virtual reality headset and saw themselves driving along a lane towards a variety of obstacles - inanimate objects, animals, and humans: so dogs, males, females... - in fog. The fog suddenly lifted and they were given between one and four seconds to decide whether to go straight on or to veer off. If they went straight on, one set would be hit - some would die, possibly including themselves; if they veered off, another set would die. The result was that the dog was the most valuable animal - more valuable than the cat! - children were more valuable than adults - not surprisingly - and females were very much more valuable than males!
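[Editor's note: as an illustration of the kind of ranking the study uncovered, here is a minimal sketch in Python - not taken from the study itself; the harm weights are invented purely for illustration - of how such a "value-of-life" ordering could be encoded as a cost comparison between the two possible trajectories.]

```python
# Hypothetical sketch: encoding a "value-of-life" ordering as a cost
# comparison between two trajectories. The weights are invented for
# illustration; they are not figures from the study.

HARM_WEIGHTS = {
    "child": 10.0,   # children ranked above adults in the study
    "female": 6.0,   # females ranked well above males
    "male": 4.0,
    "dog": 2.0,      # the dog outranked the cat
    "cat": 1.0,
    "object": 0.0,   # inanimate obstacles carry no harm weight
}

def trajectory_cost(obstacles):
    """Total modelled harm if the car's path hits these obstacles."""
    return sum(HARM_WEIGHTS[o] for o in obstacles)

def choose_path(straight_on, swerve):
    """Pick whichever trajectory minimises the modelled harm."""
    if trajectory_cost(straight_on) <= trajectory_cost(swerve):
        return "straight on"
    return "swerve"

# Example: a child ahead in the lane, a male adult in the escape lane.
print(choose_path(straight_on=["child"], swerve=["male"]))  # swerve
```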

Chris - In other words, the drivers are coming along and deciding - in a split second - "I'm going to spare the child and hit the adult...", or, "I'm going to spare the female and hit the male..."

Peter - It's very, very difficult in one second. How much processing can you do? It's got to be done on instinct, hasn't it? With four seconds you might have just enough time to process it, but not in one second.

Chris - So that’s what a person would do. How do we code that into a computer, or do we want to?

Peter - Exactly. Can we just compare this with medical ethics? NICE [the National Institute for Health and Care Excellence] also has to make decisions about whether to save a life, but those are made with huge amounts of time and huge amounts of data. In this situation you've got a very short time. Now, a computer has got very much longer than a human, but it still can't - or should not - make a decision about whether an adult, or maybe an animal, is of less value. A human is still of the ultimate value.

Chris - What are we actually going to do then, off the back of this study? What are they saying their conclusion is?

Peter - They've come up with 20 rules. My German is a bit rusty now - I left Germany about 35 years ago - but flipping through them, the rules are basically that autonomous systems should in principle be adopted if they will cause fewer accidents - that's almost a given - and that people must be protected. A very important one: the state defines the rules, not the technology company and not the car manufacturer. The state will make the rules. The system can't distinguish by sex, age, or size, etc. And then there are a number of rules, or guidelines, about security, ownership, and data logging.

Chris - If you take those guidelines and ask: were they in place, would the computer be making the same decisions as the humans did, or different ones?

Peter - Well, that's the big question, isn't it, Chris? It could be done by machine learning: if there were enough data out there, the cars could learn and would then adopt the way humans behave. But in principle it's almost impossible to work out how they can make this decision, which is why this has all been brought into the public domain so early - to have the debate.
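[Editor's note: one way to read "machine learning" here is fitting a preference model to recorded human choices. The sketch below - with invented trial data, not the study's - fits a simple logistic model whose coefficients recover an implicit ranking of whom participants chose to protect.]

```python
# Hypothetical sketch: recovering implicit human priorities from recorded
# swerve-or-not choices, as a stand-in for the machine-learning idea above.
# The trial data is invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

CATEGORIES = ["child", "adult", "dog"]

# One feature vector per trial: victims hit by swerving minus victims hit
# by going straight on. Label: 1 if the participant swerved, else 0.
X = np.array([
    [ 1, -1,  0],   # swerving hits a child instead of an adult -> stayed
    [ 0,  1, -1],   # swerving hits an adult instead of a dog   -> stayed
    [-1,  0,  1],   # swerving hits a dog instead of a child    -> swerved
    [ 0, -1,  1],   # swerving hits a dog instead of an adult   -> swerved
])
y = np.array([0, 0, 1, 1])

model = LogisticRegression().fit(X, y)

# More negative coefficients mark categories participants protected more.
for name, coef in zip(CATEGORIES, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```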

Chris - But the thing is, it really is important, because when we start making not one but a million of these electric cars, the software that's in one will be propagated into a million. And so the decision that one makes will be the same decision a million make, so we have to get this right.

Peter - Yeah, I think it will be, because the different car manufacturers will have different algorithms - I don't think there'll be a global algorithm for this. But you're right: somehow it needs to be got right.