Geoff Hinton: His concerns about AI

And the ensuing media storm...
25 June 2024

Interview with Geoff Hinton

In this episode of Titans of Science, Chris Smith spoke to AI pioneer Geoff Hinton about how AI works, and what we should be wary of...

Chris - You went on to join Google. What led to you saying, actually, I'm going to leave?

Geoff - I was working on how to make analog computers that would use a lot less energy than the digital computers we use for these chatbots. And while I was doing that, I came to the realisation that the digital computers were actually just better than the analog computers we have, like the brain. For most of my life I was making these models on digital computers to try to understand the brain, and I'd assumed that as you made things more like the brain, they would work better. But there came a point when I realised that these things can do something the brain can't do, and that makes them very powerful. The thing that you can do that the brain can't is have many identical copies of the same model. On different computers, you simulate exactly the same neural network, and because it's a digital simulation, you can make it behave in exactly the same way. Now what you do is make many, many copies: you show one copy one bit of the internet and another copy another bit of the internet, and each copy learns for itself on its bit of the internet and decides how it would like to change its weights so that it gets better at understanding that bit. Once they've all figured out how they'd like to change their weights, you tell all of them to change their weights by the average of what all of them want to do. By doing that, you allow each of them to know what all the others learned. So you could have thousands of copies of the same model looking at thousands of different bits of the internet at the same time, and every copy could benefit from what all the other copies learned. That's much better than what we can do. For me to learn from you, you have to produce sentences and I have to figure out how to change my connection strengths so that I would have been likely to produce those sentences, and that's a slow, painful business called education. These things don't need education in that sense. They can share knowledge incredibly efficiently, with much higher bandwidth than we can.
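A minimal sketch of the weight-sharing scheme Geoff describes: identical copies of one model each work out the weight change they would like on their own data shard, then every copy applies the average of all the proposed changes, so each benefits from what the others learned. The toy least-squares task, the shard sizes, copy count and learning rate below are illustrative assumptions, not details from the interview.

```python
# Sketch of Hinton's point: N identical digital copies of one model each
# learn on their own data shard, then all apply the AVERAGE of the updates
# the copies proposed, so every copy gets what the others learned.
import numpy as np

rng = np.random.default_rng(0)

n_copies = 8            # in the interview this would be thousands of copies
dim = 4                 # toy model: a linear weight vector for least squares
weights = rng.normal(size=dim)   # one shared weight vector, digitally replicated

# Each copy sees a different "bit of the internet" (its own data shard).
true_w = rng.normal(size=dim)
shards = []
for _ in range(n_copies):
    x = rng.normal(size=(32, dim))
    shards.append((x, x @ true_w))

def proposed_update(w, x, y, lr=0.05):
    """The weight change a single copy would like, from its own shard."""
    grad = x.T @ (x @ w - y) / len(y)
    return -lr * grad

for step in range(200):
    # Every copy works out its desired change (in parallel, in principle)...
    updates = [proposed_update(weights, x, y) for x, y in shards]
    # ...then all copies apply the average, staying exactly identical.
    weights += np.mean(updates, axis=0)

print("error:", np.linalg.norm(weights - true_w))  # shrinks toward zero
```

This is essentially synchronous data-parallel training with update averaging. The detail Geoff is highlighting is the communication channel: the copies exchange full weight updates rather than sentences, which is why the bandwidth between them is so much higher than between people.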

Chris - So what led to you deciding that it was time to call time at Google, then?

Geoff - People have sort of the wrong story. The media loves a nice story, and a nice story would've been: I got very upset about the dangers of AI and that's why I left Google. It wasn't really that. I was 75; it was time to retire. I wasn't as good at doing research as I had been, and I wanted to take things easy and watch a lot of Netflix, but I thought I'd take the opportunity to warn about the dangers of AI. And so I talked to a New York Times journalist and warned about the dangers of AI, and then all hell broke loose. I was very surprised at how big a reaction there was.

Chris - Were you really?

Geoff - Yes. I didn't expect there to be this enormous reaction. And I think what happened is, you know how when a huge wave comes, there's a whole bunch of surfers out there who'd like to catch the wave, and one particular surfer just happens to be paddling at just the right time, so he catches the wave. But if you ask why it was that surfer, it was just luck. Lots of people have warned about the dangers of AI, but I happened to warn just at the time it became something of intense interest, and I happened to have a good reputation from all the research I'd done. So I was kind of the straw that broke the camel's back, but there were a whole lot of other straws there.

Chris - Indeed. I think it also has a lot to do with who is saying it, doesn't it? If you've got somebody who's a journalist saying, well, I've heard a few people say this, it carries very different weight than if someone like yourself, who's devoted their career to it, been very successful and been a pioneer, says there are concerns; people are going to take that a lot more seriously. But what do you think those main concerns are?

Geoff - Okay, so there's a whole bunch of different concerns. What I went public with was what's called the existential threat, and I went public with that because many people were saying this is just silly science fiction, it's stupid, it's never going to happen. That's the threat that these things will get more intelligent than us and take over. I wanted to point out that these things are very like us, and once they get smarter than us we don't know what's going to happen, but we should think hard about whether they might take over and do what we can to prevent that happening. Now, there are all sorts of other risks that are more immediate. The most immediate risk is what's going to happen in elections this year, because we've got all this generative AI now that can make up very good fake videos, fake voices and fake images, and it could really corrupt democracy. There's another thing that's happened already, which is that big companies like Facebook and YouTube use techniques for getting you to click on things, and the main technique they use is to show you something even more extreme than what you just watched. That's caused a big polarisation of society, and there's no concept of agreed truth anymore. Each group keeps getting reinforced, because things like Facebook and YouTube show them things they like to see. Everybody loves to get indignant, it turns out. If you were to tell me there's a video of Trump doing something really bad, I would of course click on it so I could see what it was. That's really terrible for society. It means you get polarised into these different groups that don't talk to each other, and I don't think that's going to end well.

Chris - It's sort of amplified the echo chamber, hasn't it? The one thing I thought you might lead with when answering that question, though, was the concern that immediately sprang to my mind, working in science, which is an evidence-based discipline where we take the weight of evidence to decide whether we're on the right or the wrong path. If you have systems which are confabulating, they are potentially polluting the knowledge space, lending veracity and authenticity to things that are actually completely wrong. That could lead us totally down the wrong path and end up with us mis-learning lots of things, couldn't it?

Geoff - Yes, it could. Though of course scientists already do that themselves; there are scientists who just make things up, particularly when it comes to theory. Chomsky, for example, managed to convince lots and lots of linguists that language is not learned. It's pretty obvious that language is learned, but he managed to convince them it wasn't. So it's not just chatbots that can mislead people.

Chris - What do you think the biggest benefits are going to be, though, and on what sorts of timescales?

Geoff - Well, I think there are going to be huge benefits, and because of those huge benefits we are not going to stop developing these things further. If there were only the risks, maybe we could agree to just stop, but there are going to be huge benefits in areas like medicine, where everybody's going to be able to get much better diagnoses.

Chris - And any really speculative, adventurous, kind of out-there thoughts about what it might enable us to do in the future?

Geoff - More or less anything <laugh>. I mean, there's a view of human progress that says it consists of removing limitations. When we were hunter-gatherers, we had the limitation that you have to find food every few days. As soon as you start farming, you're living in the same place, so you can store food and you don't have to find it every few days. That got rid of a huge constraint. Then there's the constraint that you can't travel very far, because you have to walk or ride a horse; transport like trains and bicycles and cars and planes eventually got rid of that. Then there's the constraint that we've got limited strength. The industrial revolution came, we got machines that were much stronger, and human strength ceased to be something of much value. Now what's happening with these big chatbots is that our routine intellectual abilities are being surpassed by them, or soon will be. So all of those jobs that just need a reasonably intelligent person to do them are in danger of being done by these chatbots, and that's kind of terrifying.
