The hidden dangers of AI

Are the dangers of AI much closer to home than we think?
18 December 2018

Interview with Vivienne Ming, SOCOS


AI, or artificial intelligence, is rarely out of the news at the moment, with all kinds of claims about how it will change the world: either that it will be a revolutionary technology making the world a better place, or that a super-smart computer is going to take over. So what are the risks of AI? Georgia Mills caught up with AI entrepreneur Vivienne Ming at the Royal Society’s “You and AI” event in London...

Vivienne - No one has invented a technology that thinks like we think. That understands the world. There is no AI that, given enough processing power, will have an opinion about Brexit or will prefer coffee over tea. Nothing like that exists. And in fact, anyone who says they know when it's coming is essentially saying they can predict when a truly novel invention, that no one yet has made, will happen. So maybe it will be tomorrow. I doubt it. And maybe it'll be 20 years and maybe it will never happen.

I don't think there's any theoretical reason to think we won't have very intelligent AI out there, someday, but it's coming no time soon and therefore it's not a technology that can take over the world. I think what we're genuinely afraid of is people and what people will do with immensely powerful tools in their hands. AI can truly do some terrifying things; autonomous weapons, the use of artificial intelligence by autocracies to maintain power. Those are things we really need to worry about.

Georgia - When people want to use machine learning or AI for problem solving, where can that go wrong?

Vivienne - I have been coming at artificial intelligence for a very long time from the perspective that I want to solve problems. The thing is, the success of my work has come from a deep understanding of the problem. When I was the Chief Scientist of, perhaps, the very first company to ever use AI for hiring, the first thing I did was read 100 years - very literally - of research papers about what makes a great employee. Then we built AI to look for those qualities in people. By contrast, in many notorious cases - most recently Amazon’s - engineers built a very complicated deep neural network and threw it at the hiring history of Amazon. And guess what? It didn't want to hire women. It turns out that getting hired at Amazon and being a man happen together a lot. So that AI learned to associate the two things.
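The failure mode Ming describes can be made concrete with a minimal synthetic sketch. Everything here is hypothetical - the numbers, the features, and the "history" are invented for illustration, not drawn from Amazon's actual system - but it shows how a model fitted to biased hiring records comes to treat gender as a predictor of hireability:

```python
import random

random.seed(0)

# Hypothetical synthetic "hiring history": past decisions favoured men
# regardless of skill, mimicking a biased training set.
def make_history(n=1000):
    data = []
    for _ in range(n):
        is_male = random.random() < 0.7   # mostly male applicant pool
        skill = random.random()           # true merit, uniform in [0, 1)
        # Biased historical rule: men are hired above a modest skill bar,
        # women only when exceptional.
        hired = (skill > 0.5 and is_male) or (skill > 0.9)
        data.append((is_male, skill, hired))
    return data

def hire_rate(rows, keep):
    """Empirical hire rate over the rows matching a predicate."""
    rows = [r for r in rows if keep(r)]
    return sum(r[2] for r in rows) / len(rows)

history = make_history()
male_rate = hire_rate(history, lambda r: r[0])
female_rate = hire_rate(history, lambda r: not r[0])

# Gender strongly predicts the historical label, so any model that
# minimises error on this data will learn to prefer men - it is
# faithfully reflecting the bias in the record, not discovering merit.
print(f"hire rate | male:   {male_rate:.2f}")
print(f"hire rate | female: {female_rate:.2f}")
```

The point is not the toy rule itself but the asymmetry it produces: a learner shown only outcomes, with no model of what actually makes a good employee, has no way to distinguish the signal (skill) from the historical prejudice (gender) baked into its labels.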

That probably says something unpleasant about Amazon, but it also says something about the naivety of turning some of the most challenging problems in human history over to algorithms: Who should get a loan? Who should get hired? How do we take bias out of the judiciary? Who even gets into our countries?

All of these things put together are now being put in the hands of some very young people who have come out of university having learned to do something incredibly challenging. They have learned how to build deep neural networks, tune their hyperparameters, and architect these elaborate models. But, in the end, all of artificial intelligence, as it exists today, is a tool. And that, I think, is one of the most immediate problems with artificial intelligence: thinking it will solve our problems for us when, in fact, all it can ever do is reflect our own ethical choices back in our faces.

Georgia - Do you think it's going to make society a more or less fair place?

Vivienne - An interesting truth in my experience with almost all technology, not just artificial intelligence, is when it first comes out it invariably helps the people that need the least help. Because people like me with very fancy degrees, living in elite places, we’re the ones that can actually make use of it. This is true of the Internet. This is true of educational technology. Turns out it's immensely true of artificial intelligence. Artificial intelligence increases inequality. We tend to make tools that make life a little easier. And it turns out that the people that are able to make the most use of it are the people in large companies and, for them, making life a little easier is driving wages to zero.

And when they look at what an AI can do - it can read a contract and find all the loopholes, it can take a spreadsheet and analyse all the risk in a financial investment, it can write code. It can do all these expert judgements that have, up until this moment in time, been solely the domain of humanity, and it can increasingly do them cheaper, faster and better than people can. And if you're the CFO of a Fortune 500 company, your first reaction to that is: “Why the hell are we paying for all these software developers and lawyers and financial analysts? I can't fire them all, but maybe I can replace them with people who never even went to university. Combining them with an AI, we can create something that is 80 percent as good as that very expensive college graduate we used to hire.”

We can do so much better than that. We can make a choice that actually draws people into the creative economy instead of what I call de-professionalising. I hope we make those choices, because 10 years from now it'll be too late to say, “Oh my goodness, that was a bad idea!”

