Will we ever have truly smart machines?

Is super-intelligent AI a pipe dream?
17 October 2017

Interview with 

Peter Clarke, Resurgo Genetics, Henry Shelvin, Leverhulme Centre for the Future of Intelligence, Simon Beard, The Centre for the Study of Existential Risk


Will artificial intelligence ever really outpace humanity? Chris Smith put this to Peter Clarke from Resurgo Genetics.

Peter - I think it depends what you mean by truly smart but, yes, I think it’s very likely. I think it’s almost inevitable that at some point in the future, over an as yet undetermined timeframe, we will have things that are smarter than us across pretty much everything. It’s managing that long term vision - that long term trajectory. You can already see it, for example, in medical diagnostics: ECGs can be better at picking up certain types of heart problems than trained doctors, and the same goes for interpreting X-rays and a whole bunch of other medical tasks. Some of these new algorithms surpass human intelligence in some sense, but getting towards a general intelligence is a different matter.

Chris - What do you think Henry?

Henry - Your listener who commented on Twitter about the need to distinguish algorithms from intelligence is on to a really important point. I think intelligence is one of those deep, socially laden concepts that’s hard to define and carries a lot of baggage with it. We can draw upon different fields that have used the term “intelligence” in different ways, so one source of guidance here might be biology, where biologists have been interested for a very long time in quantifying different kinds of intelligence in animals. That’s not just a matter of how well an animal can do a certain thing: spiders are brilliant at building webs; dogs have an amazing sense of smell. In the biological context, they look for things like the ability to engage in novel behaviours and the ability to engage in flexible behaviours. So it’s that ability to cope with new circumstances and different kinds of tasks that seems to be a key part of intelligence the way biologists look at it. I think, if we’re thinking about when we’re really going to have smart machines, that kind of flexibility is going to be part of the answer.

Georgia - Is there this idea that things might speed up incredibly once we start getting smarter and smarter machines? Are we even on this curve yet - when do we think it might happen?

Henry - Obviously, with things like the internet, we have amazing new research tools; it’s far easier to collaborate and learn, and you might think that, with better tools, technological progress should surely be speeding up. But you also face the fact that a lot of the low-hanging fruit of technology - a lot of the easy problems - has already been picked. So, although we might gain new advantages from smart machines, as we uncover more and more limits in technology we’re going to face correspondingly larger problems, so we might just keep pace rather than speed up.

Chris - Do you think there could be more nefarious ways these artificial intelligences could undermine us, Simon, and, with that in mind, what do we need to watch out for?

Simon - To quote Alexa: “I’m sorry, I don’t know that.” We need to be alert to what might happen but, at the moment, it’s very hard to predict. We are dealing with non-linear changes; we are dealing with technologies whose workings are increasingly hard to explain, even to people with a technical background. And the goal isn’t to try and predict, based on how they work now, exactly what is going to happen next. It is to try and map out the possibility space - the best outcomes and the worst - and the steps we can take that are likely to move us towards the best and away from the worst. But also, crucially, to keep monitoring the situation, stay alive to it, and keep a future focus on how we can most quickly react to the changes that we’re seeing.
