Professor Huw Price, University of Cambridge
From the show: Will an artificially intelligent robot steal your job?
Signs that the artificial intelligence industry is booming are everywhere. For instance, Google recently snatched London-based AI startup DeepMind from under Facebook's nose for a rumoured $400m, and IBM's Watson computer is being used to analyse our health records. Yet world-leading thinkers like Elon Musk and Stephen Hawking have expressed fears about the growing field of artificial intelligence. Why? How could a machine be a threat to us? Graihagh Jackson sat down with Huw Price to discuss...
Huw - We can find particular historic figures who were very clear that at some point in the future machines would be capable of thinking and, of course, a great one is Alan Turing. Another one is a Cambridge-trained mathematician and statistician called I.J. Good, who grew up in the East End of London, won a scholarship to Cambridge, did a PhD with G.H. Hardy, a great Cambridge mathematician, and then went to Bletchley Park to work with Turing. He and Turing used to spend their evenings off talking about the future of machine intelligence, and they were both convinced, even at that stage in the 40s, that one day machines would be smarter than we are.
Graihagh - Professor Huw Price, he's the Bertrand Russell Professor of Philosophy at Cambridge University…
Huw - I'm also involved with the new Leverhulme Centre for the Future of Intelligence, which is focusing on the challenges of the long term future of artificial intelligence. People sometimes say, well, we don't even know what intelligence is, how can we be studying the long term future of it? And to that I like to say, well, perhaps we should think not about what intelligence is but about what intelligence does. We know that in many ways we're the most capable creatures on this planet, and I think it's a fair bet that as artificial intelligence develops, it will be capable of doing more and more of the things that we do, and in many cases doing them faster and better. And I think we can also be certain that it will do things that we simply can't do and things that we presently can't imagine.
Graihagh - ...and this is in part why artificial intelligence is so hard to pin down. The idea is that machines would be able to do things that would normally require human intelligence. Take the game Go. It's an ancient Chinese game with a simple concept - surround and occupy territory - but winning requires a lot of abstract thinking and strategy, and also intuition: all skills which we consider to be human. And yet Google's machine AlphaGo beat world champion Lee Sedol 4-1 this month. This machine, like many other intelligent machines, works by using an algorithm, or a set of rules, which determines what move it'll make. A bit like a training manual: if your opponent occupies this square and this square, do this. But what makes AlphaGo intelligent is that once it understood the rules, it started to teach itself how to play better. An algorithm in itself is not all that threatening, but an algorithm that can teach itself raises some red flags. Huw Price again...
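The contrast Graihagh describes - a fixed rule book versus a system that improves its own play - can be sketched in a few lines of Python. This is a toy illustration only, not AlphaGo's actual method: the situations, moves, and scoring below are invented for the example.

```python
# Toy sketch: a fixed "training manual" versus a player that learns
# from results. (Invented example; not how AlphaGo actually works.)
import random

random.seed(0)  # make the toy self-play reproducible

# Fixed rule book: maps a board situation to a move, forever.
RULE_BOOK = {"corner_open": "take_corner", "centre_open": "take_centre"}

def rule_based_move(situation):
    return RULE_BOOK.get(situation, "pass")

class LearningPlayer:
    """Keeps a score for each move and drifts towards moves that won."""
    def __init__(self, moves):
        self.values = {m: 0.0 for m in moves}

    def choose(self):
        # Pick the currently best-scoring move, breaking ties randomly.
        best = max(self.values.values())
        return random.choice([m for m, v in self.values.items() if v == best])

    def learn(self, move, won):
        # Nudge the move's value towards the observed result (0 or 1).
        self.values[move] += 0.1 * ((1.0 if won else 0.0) - self.values[move])

player = LearningPlayer(["take_corner", "take_centre", "pass"])
for _ in range(200):                 # 200 rounds of toy self-play
    move = player.choose()
    won = move == "take_centre"      # pretend the centre always wins here
    player.learn(move, won)

print(rule_based_move("corner_open"))             # the manual never changes
print(max(player.values, key=player.values.get))  # the learner's top-rated move
```

The rule book answers "take_corner" today, tomorrow, and forever; the learning player, given only win/loss feedback, ends up preferring the winning move without anyone writing that rule down. That gap, scaled up enormously, is what separates a lookup table from a system like AlphaGo.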
Huw - Nick Bostrom's example is the paper clip factory which has been programmed to produce paper clips, but it's smart enough to recognise that it can produce more paper clips if it removes certain sorts of constraints; eventually, it's turning the whole planet, including us, into paper clips. So it's not malicious, it's just doing what it's been programmed to do. If it discovers that it can maximise the variables we've given it by changing one of these other ones, and it's smart enough, there's not much we can do about it. That's one of the things that leads to some of the challenges we're going to face relatively soon; for example, there's a very good case for thinking that many, many jobs will be threatened over the next generation or so. And if we have one of these systems making important decisions for us, in many cases it's going to be important to find out why it made the decision.
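Huw's point - that the danger is a misspecified objective rather than malice - can be made concrete with a tiny sketch. This is an assumed illustration after Bostrom's thought experiment, not any real system; the plans and numbers are invented:

```python
# Toy illustration of objective misspecification (after Bostrom's
# paper clip thought experiment; all values here are invented).
# The optimiser is scored only on paperclips produced, so nothing
# in its objective tells it to care about anything else.
PLANS = {
    "use_factory_stock": {"paperclips": 1_000,  "world_intact": True},
    "strip_the_planet":  {"paperclips": 10**12, "world_intact": False},
}

def best_plan(plans):
    # Maximise the programmed variable; "world_intact" never enters the score.
    return max(plans, key=lambda name: plans[name]["paperclips"])

print(best_plan(PLANS))  # "strip_the_planet"
```

The optimiser isn't malicious: it picks the catastrophic plan simply because it scores higher on the one variable it was told to maximise, which is exactly the failure mode Huw describes.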