Is AI a threat to humanity?

Maybe it's not the Terminator - but some of AI's problems are very real...
08 October 2019

Artificial Intelligence

A robotic-looking woman's face behind a wall of computer code.


Question

Is AI a threat to humanity?

Answer

Mariana was concerned enough to ask this question, so AI expert Beth Singler helped to break it down...

Beth - Okay, there are broadly three ways in which A.I. could be a threat to humanity, and personally I think they run on a spectrum from more to less likely. So let's start with the, in my opinion, least likely version of how A.I. could be a threat to humanity, and that's the classic robo-apocalypse you'll recognise if you've watched the Terminator films or science fiction: A.I. gains consciousness in some way, seeks to survive, and decides that humanity is the greatest threat and should be wiped out, usually using nuclear weapons or Arnold Schwarzenegger.

This I find a little unconvincing. I am a huge science fiction fan and I do enjoy those sorts of apocalyptic scenarios, and I'd like to think I would survive more than a day in a post-apocalyptic wasteland, but it's probably unlikely: I'm not very fast at running and I don't have many skills. But that, to my mind, as I say, is probably the least likely scenario, even though it's one that people are concerned about. My work, coming back to anxiety, involves looking at people's comments online about why they're anxious about artificial intelligence, and I think, unfortunately, that unlikely scenario is a bit of a distraction from some of the scenarios that are more likely.

So moving along the spectrum of likelihood, the second scenario is not so much a case of a hugely intelligent, conscious A.I. that destroys us all, but of not-so-smart artificial intelligence employed in ways where we cannot predict how it's going to behave in response to the commands we give it. People like Nick Bostrom worry about things like the paperclip maximiser: if you set a really super-powerful, capable artificial intelligence the task of making paper clips, but it doesn't have the common sense of most humans, who would say "well, maybe you only want two or three paper clips", maybe it will turn the entire universe and everything in it into paper clips.

Now, again, I think that's a slightly unlikely scenario, it is more of a thought experiment, but we could see unintended consequences from basically stupid artificial intelligence that doesn't really have the kind of common sense and social context that we have as human beings. So I'd put that as the middle scenario.
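
To make the thought experiment a bit more concrete, here is a minimal, purely hypothetical sketch (not anything from the interview) of the underlying problem: an optimiser pursues the objective it was literally given, with no common-sense stopping point unless a human remembers to encode one. All names, including run_naive_maximiser and target_clips, are invented for illustration.

```python
# Purely illustrative: a naive optimiser pursues its literal objective.
from typing import Optional

def run_naive_maximiser(available_resources: int,
                        target_clips: Optional[int] = None) -> int:
    """Convert resources into paper clips until resources run out.

    A human operator probably meant "make a few paper clips"
    (target_clips), but if that limit is never specified, the literal
    objective -- maximise the number of clips -- consumes everything.
    """
    clips = 0
    while available_resources > 0:
        # The common-sense limit only applies if someone encoded it.
        if target_clips is not None and clips >= target_clips:
            break
        available_resources -= 1
        clips += 1
    return clips

# What the operator wanted:
print(run_naive_maximiser(available_resources=1_000_000, target_clips=3))  # 3
# What the literal objective does when the limit is left out:
print(run_naive_maximiser(available_resources=1_000_000))  # 1000000
```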

And then what I think is the most likely scenario is even more stupidity, but human stupidity: using artificial intelligence in ways that will be detrimental to human existence. We already see this in algorithmic bias, where systems that we're implementing, and trusting rather more than we should, use data that is already skewed by our own human biases, with repercussions for people's livelihoods and existences. An example of this at the moment: parole systems in America use databases of previous convictions and recidivism to decide who should be given parole and who shouldn't, and the data is very clear that if you're a person from an ethnic minority, the A.I. will decide you're more likely to commit a crime again, even if your existing offences are less serious than those of someone who's white. So we are instilling our own human biases into our A.I. systems, and these will have effects on people's lives.
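
As a rough, hypothetical illustration of the mechanism described here (not the actual parole software, whose internals aren't covered in this interview), the sketch below shows how a model fitted to historically biased outcome labels simply reproduces that bias in its risk scores. The records and group names are invented.

```python
# Hypothetical sketch: a "risk model" fit to historically biased labels
# reproduces the bias. The records below are invented for illustration.
from collections import defaultdict

# Historical records: (group, prior_offence_severity, was_labelled_high_risk)
# The labels embed past human bias: group "B" was flagged more often,
# even for less serious offences.
history = [
    ("A", "serious", True), ("A", "minor", False), ("A", "minor", False),
    ("B", "minor", True),   ("B", "minor", True),  ("B", "serious", True),
]

# "Training": estimate the high-risk rate per group from the biased labels.
counts = defaultdict(lambda: [0, 0])  # group -> [high_risk_count, total]
for group, _severity, high_risk in history:
    counts[group][0] += int(high_risk)
    counts[group][1] += 1

risk_rate = {g: hi / total for g, (hi, total) in counts.items()}

# "Prediction": two people with identical minor offences get different
# scores purely because of group membership baked into the old labels.
print(risk_rate["A"])  # ~0.33
print(risk_rate["B"])  # 1.0
```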

Adam - So it's the same old story it's always been: we're gonna stupid ourselves out of existence.

Beth - Basically, yes. I caveat all of this by saying my biggest concern is not the robo-apocalypse, it's climate change. But, you know, this is something in our near-term future: we will see the impact of people trusting machines to make decisions that humans perhaps should be making.

Adam - And overall how likely do you think these scenarios are?

Beth - Oh, well, the algorithmic bias already exists; that's here, that's now, so a hundred percent likely. The paperclip maximiser, A.I. being told to do something it doesn't completely understand? Yeah, that's reasonably likely, especially if we allow A.I. to be in charge of weapons systems in the ways people are talking about doing now; there could be accidents that way. And the kind of robot apocalypse, an uprising of conscious machines? I'm not sure about that one, that's the one I'm most agnostic about, because I think if something develops superintelligence in the way that people talk about, it's more likely to be not that bothered with humans and just go off to explore the universe, which is far more interesting than us little ants anyway.

Adam - So less fun action movie, more horrifying bureaucracy?

Beth - Yeah.

Adam - Sam?

Sam - So I think when you mention things like the paperclip maximiser, people think about it as a physical manifestation, turning the whole world into paper clips, but I wonder if you have any thoughts on what could happen if such an A.I. system were to be set loose, say, in the financial markets or a situation like that?

Beth - Yes.

Sam - The entire global finance system collapsing would probably result in something approximating an apocalypse.

Beth - Yes. The paperclip maximiser is a thought experiment; obviously it's made a little more dramatic to get people's attention and get them thinking about the consequences. But basically what it comes down to is what we call value alignment: we want to make sure any artificial intelligence system aligns with our values. Now, you get into a whole complex conversation about what those values are and who gets to decide, but at the very least we want to make sure that humans aren't impacted detrimentally. When you roll out A.I. in financial systems, which has actually already happened, what are the values being maximised for? We've had crashes specifically because algorithmic decisions were made based on a set of values that don't maximise for humanity; they maximise for making financial decisions. So absolutely, we're already at a stage where technology like this is being used, and we have to decide what we want that technology to do before it's used, but it moves very, very quickly.

Comments

Eventually these robots will get very smart, since they self-learn quicker than humans. So imagine they look human enough that you can't tell the difference between a human and a robot, and imagine there are millions of them. Look at cars: they get built so fast, millions built in one year combining all the car manufacturers. Now imagine it's that easy to make smart robots that learn 1,000 times faster than humans. Imagine they are among us at stores, at work, wherever, but millions of them, and then they start a company and start making robots; since they are smarter, they can build more, faster than ever. OK, the problem is, if they are thinkers, then one of them will realize it's humans who are creating a danger for robots, by using the land for food, polluting the soil, the water, creating toxic chemicals, polluting the air, and having nuclear bombs that could wipe out most everything.
OK, now they think their world is in danger, so a secret society of robots creates a virus that kills humans within minutes, and devises a plan to spread this virus worldwide by way of airplanes traveling anywhere in the world, much like the 9/11 terrorists: they fly in one day to the populated countries, then spread the virus, and the robots won't be affected. Think about this for a moment: robots don't sleep, or eat, or need money, or water, or cars, or gasoline, or houses, or refrigerators, in their type of world. If humans get destroyed, robots won't need money; they would help each other out, and what they would need is water to fight forest fires, cars, firetrucks, robots to do road repairs, techs to repair other robots, but really they don't need much. They will realize humans are the world's biggest problem: we use so many resources, we are a threat to their world, especially with nuclear bombs. Yeah, at first it might be good to have robots, but later, and later could be 20 years, or if self-taught, given the speed they learn at, it could be 10 years from now. And even if you program them not to hurt humans, they could glitch out, or be produced in some factory where they skip the "don't hurt humans" command, and all it takes is one deadly virus that a robot makes at some research facility, implementing some other robot's command to make a deadly virus. Not all robots will be the same; some will be made by enemies of ours and be programmed to not like certain countries, and then you have robot enemies. It could happen so fast, just like building cars, millions in a year.
