Driverless cars: are we ready?
There are so many reasons autonomous cars could change things: they could give independence to those who are unable to drive, massively reduce congestion and pollution, and cut the number of accidents. But what legal and moral challenges are coming our way, and how far off are these cars from becoming a reality? Georgia Mills and Chris Smith explored the issues, starting with the technological challenges we need to solve, with Oxford University's Paul Newman.
Paul - Imagine a computer was presented with an image. Now an image is literally a list of numbers.
Chris - What are those numbers?
Paul - Those numbers would be every pixel. Every small element of a picture is perhaps described by three numbers: how much red there is, how much green there is, and how much blue there is. So by writing down lists of three numbers, that's how I can code an image.
Now when we ask a machine to see, what's really happening at the fundamental level is that the machine is presented with a very, very epically long list of numbers in sets of three, and it's asked to say - is that person about to cross the road?
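Paul's description can be made concrete with a short sketch. The pixel values and the brightness calculation below are purely illustrative (they're not from the interview); the point is simply that an image is a flat list of red-green-blue triples, and "seeing" means computing over that list:

```python
# An image is just a list of numbers, three per pixel: how much red,
# how much green, how much blue. Here is a tiny 2x2 "image":
width, height = 2, 2
pixels = [
    (255, 0, 0),     # a red pixel
    (0, 255, 0),     # a green pixel
    (0, 0, 255),     # a blue pixel
    (255, 255, 255), # a white pixel
]

# Any "seeing" a machine does starts from computations on this list.
# The simplest possible example: average brightness across all channels.
brightness = sum(r + g + b for r, g, b in pixels) / (3 * len(pixels))
print(brightness)  # 127.5
```

A real camera frame is the same idea at scale: a 1920x1080 image is a list of over two million such triples, roughly six million numbers, from which the machine must decide whether a person is about to cross the road.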
Chris - My brain, Paul, knows how to do that, right? It does it very well. Your brain does it exceedingly well, so there is a solution, albeit one currently implemented by a neural network in my brain. So, therefore, it must be feasible in some way to emulate this process?
Paul - Which is why I have this job. Because this is an existence proof that, you know, hedgehogs can do this. There is a school of thought that you look at how neural systems do it and you reverse engineer that. There's another school of thought that you engineer it from the ground up. I'm in a luxurious position; I can believe in both.
And something different is happening now. So one of the big questions is: why this excitement in machine learning and self driving cars now? What is it that's different? And I think there are a couple of things. Computer games helped us. There are things called GPUs, Graphics Processing Units, that we use to generate the amazing graphics we see in games. And the hardware that was developed for those machines has made it possible to do a kind of computing that's very useful for machine learning.
Machine learning is, simply put, this: I'm going to give a computer a whole load of data - pictures - and I'm going to perhaps label the hedgehogs and baboons in those pictures. And then the test is: can that machine learn a method so that, when presented with another image, it says "there's the baboon"?
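The pattern Paul describes - labelled examples in, a rule out - can be sketched in a few lines. The features below (body length in centimetres, weight in kilograms) and the nearest-neighbour rule are invented for illustration; real systems learn far richer rules directly from raw pixels:

```python
# Labelled training data: (features, label) pairs.
# Features here are (body length in cm, weight in kg) - purely illustrative.
labelled = [
    ((25, 0.8), "hedgehog"),
    ((30, 1.2), "hedgehog"),
    ((75, 20.0), "baboon"),
    ((80, 25.0), "baboon"),
]

def classify(example):
    """1-nearest-neighbour: answer with the label of the closest known example."""
    def dist(a, b):
        # Squared Euclidean distance between two feature tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(labelled, key=lambda item: dist(item[0], example))[1]

print(classify((28, 1.0)))   # small, light animal -> "hedgehog"
print(classify((70, 22.0)))  # large, heavy animal -> "baboon"
```

Nobody wrote a rule saying "hedgehogs are small"; the rule falls out of the labelled data, which is the whole point of the approach.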
Chris - But what has that got to do with me wanting to have a car which I don't have to drive myself - I can sit in it and it will take me where I want to go?
Paul - Because we now programme machines to learn, so that when presented with data about roads or cars, we don't sit down and programme a detector of cars; we label lots of data and we programme algorithms that have structure in them allowing them to learn for themselves. So we're not explicitly saying: if there's a red pixel next to an orange pixel, next to some chrome, above something that looks like a tyre, it's a car. We would never do that, because if the car's point of view changed and you looked at it from the side, it's a different thing. It's not robust.
Chris - Now if I were talking to you as a chemical engineer about some new drug we wanted to invent, and I asked you how long before it goes into a patient, the stock answer is always five to ten years. So what's the stock answer for a self-driving car?
Paul - We'll start to see stuff that looks pretty compelling in around about five years, I think. In some places much before, but we have to be careful about really what is that system and what is it doing?
Georgia - There are so many reasons autonomous cars could change things: they could give independence to those who are unable to drive, massively reduce congestion and pollution, and also reduce accidents. But while the technology has a long way to go, there are other challenges lying ahead. For example - who's to blame if a driverless car crashes into you? Peter Lee is a lawyer and the CEO of Wavelength.LAW.
Peter - There's a myriad of related legal issues in the area of driverless cars - around privacy and around their integration onto the public highways, but also the question mark around who's liable if a driverless car crashes. And, at the moment, that still needs some work by governments and lawyers to figure that out in my view.
Some of the bigger manufacturers and companies out there, including Volvo, came out recently to say that they would be prepared to self-insure in the event of an accident. But then there's also other questions around how parliaments might legislate in this area in the near future.
Georgia - So I suppose you've got the person who owns the car, you've got the person who built the car, and you've got the people who coded the software the car runs on?
Peter - Yes, that's right. So the issue in this area is that if there's an accident and the car is fully autonomous, it's not the same as if somebody had been sitting behind the wheel and been negligent. It's more complicated than that and, currently, we would have to look at the chain of actors involved in that incident and try to apportion blame.
Georgia - And, I suppose, the issue here is that technology is moving faster than, I guess, society can keep up with. What do you think needs to happen for the world to be ready for all these technologies?
Peter - You're right, and this has happened throughout our history. Humans will push the boundaries with technology, and develop faster and faster. The law is often slower to react but, in some ways, that's quite a good check and balance against technical advancement. What it shouldn't be, though, is a barrier. What it needs to do is protect society and protect individuals. It's all about creating a balance between advancing technology and existing laws.
Georgia - Peter Lee. And another thing to consider is that when you allow a connected system to pilot your car, this could leave it vulnerable to cyber attacks. Ross Anderson is Professor of Security Engineering at Cambridge University.
Ross - The biggest problem facing autonomous vehicles in terms of security is that once you start connecting complex computing devices to the internet, you have to think about security as well as safety.
In the modern connected world, once somebody discovers a combination of inputs that will cause your car to crash, they can then broadcast this to thousands of cars, potentially causing thousands of car crashes. And so, once a vulnerability like that is discovered and starts to be exploited, you have to patch it quickly, and most car makers simply don't have any mechanism to go out and patch cars over the air the way that your laptop or your mobile phone gets patched.
Georgia - Have there been any examples so far of this kind of nefarious attack happening?
Ross - Attacks have been demonstrated, for example, on a Chrysler Jeep in America by security researchers last year, which caused Chrysler to recall and change the software in 1.4 million vehicles. What happened with the Jeep was that Chrysler used a network, which they called Uconnect, which didn't have effective network security, and so it was possible for somebody on the network to scan all 1.4 million vehicles that used it and determine their IP addresses and their locations. So you looked at the latitude and longitude of the target vehicle, you could then interrogate it, find out what it was, and then you could go in and interfere with the software.
Now the attackers then demonstrated this by taking over a Jeep in which a journalist was riding and deliberately slowing the Jeep down to the point that lorries behind him on the freeway were honking their horns and making him kind of scared.
Georgia - How do you think we can avoid this?
Ross - We're going to have to have some rules as to the safety and security certification of vehicles. This is going to be done in Europe: just as Europe makes the world's privacy rules, because Washington doesn't care and no other country is big enough to matter, Europe is the main player in regulating the safety of all sorts of things, from cars to medical devices to electrotechnical equipment. And so, fundamentally, this is a task facing the European Union: to upgrade all the various agencies involved in inspecting various types of safety-critical equipment and issuing safety certifications for them, to make sure that they're also sufficiently secure to withstand hostile attacks.
Georgia - Will it ever be possible to keep these systems perfectly safe?
Ross - Well, that's an unrealistic ambition. What you actually do is see to it that when vulnerabilities come to light, you've got the means to fix them quickly and to roll out these fixes to all the millions of devices you have in the field. That's what happens as an absolutely routine matter to all Windows kit on the second Tuesday of every month, and Android also updates the software in your phone every month. So this is just how the IT industry has evolved a way of coping with this. So there are mechanisms that work, and what's needed is for sellers of cars, sellers of medical devices, and so on to get with it and figure out how to make these mechanisms work for their own products.
Georgia - And while this connectedness is what is making cars vulnerable, it's also what could make them save lives. Back to Paul Newman.
Paul - I had a car accident five years ago now and I'm still haunted by it. I made a mistake - I was lucky that everyone was okay. I haven't been able to share that experience with any of my friends. That's not true about vehicles that can share their data and share their experiences.
So we can envision a world now where accidents, awful as they always are, become fewer, because the mistakes and lessons learned get shared across the entire fleet. So you can imagine a world where a fender bender in Copenhagen actively improves someone's safety in a warehouse in Cape Town that afternoon.