Titans of Science: Geoff Hinton

The pioneer of AI shares his insights into its past, present, and future...
25 June 2024
Presented by Chris Smith
Production by Rhys James.

This episode of The Naked Scientists marks the return of a brand new series of Titans of Science, where some of the movers and shakers of the scientific and technological world help us to unpick a big problem. Kicking us off is the AI pioneer Geoffrey Hinton, with a fascinating insight into artificial intelligence, how it actually works and what we need to be wary of...

In this episode

Geoff Hinton: How does AI work?

In this episode of Titans of Science, Chris Smith spoke to AI pioneer Geoff Hinton about how AI works, and what we should be wary of...

Chris - Geoffrey Hinton was born on the 6th of December, 1947 in Wimbledon. He's the great-great-grandson of the mathematician and educator Mary Everest Boole and her husband, the logician George Boole. Other notable family members include the surgeon and author James Hinton and the mathematician Charles Howard Hinton. Geoffrey attended Clifton College in Bristol before embarking on an array of undergraduate studies at Cambridge that took in philosophy and natural sciences. He graduated with a degree in experimental psychology in 1970. He continued his studies at the University of Edinburgh, where he was awarded a PhD in artificial intelligence in 1978. He also worked at the University of Sussex and the University of California San Diego, and he was the founding director of the Gatsby Charitable Foundation Computational Neuroscience Unit at University College London. Geoffrey is frequently described as the godfather of AI, and his pioneering research on neural networks and deep learning has paved the way for systems that are familiar to many of us now, like ChatGPT. He warned of the dangers posed by AI when he resigned from Google last year. Geoffrey, let's start with where you grew up. Can you tell us a bit about your childhood in London and in Bristol? Were computers part of that then?

Geoff - No. I grew up in Bristol and computers were pretty much unheard of.

Chris - So why did you decide to pursue that?

Geoff - So when I was an undergraduate at Cambridge, I got very interested in how the brain works and there's two different ways you could study how the brain works. You could do experiments on the brain or you could try and make computer models of how it works. So if you've got a theory of how something works, you can test out the theory by seeing if you can write a computer program that behaves according to the theory and actually works. And it turns out most of the early theories of how the brain works don't actually work if you simulate them on a computer.

Chris - Did that strike you as odd at the time then? Is that what set you on the path to trying to understand better why the brain is different to the logic of a computer?

Geoff - So at the time, the idea of doing science by simulating things on computers was fairly new, and it seemed just like the right approach to trying to understand how the brain learns, or at least a complementary approach to doing experiments on the brain.

Chris - And so what did you do?

Geoff - So I spent my time, when I was a graduate student in Edinburgh, writing computer programs that pretended to be networks of brain cells and trying to answer the question: how should the connections between brain cells change so that a collection of brain cells hooked up in a network can learn to do complicated things, like, for example, recognise an object in an image, or recognise a word in speech, or understand a natural language sentence?

Chris - Do we have a clear idea even today of how that works? Because obviously you were working towards something we had no idea about and trying to model it. Have we got there or are we still in the dark?

Geoff - Neither of those. We haven't fully got there, but we have a much better understanding. So we now have computer models of neural networks, things that run on a computer but pretend to be networks of brain cells, that work really well. You see that in these large language models and in the fact that your cell phone can now recognise objects; it can also recognise speech. So we understand how to make things like that work, and we understand that the brain is quite like many of those things. We're not quite sure exactly how the brain learns, but we have a much better idea of what it is that it learns. It learns to behave like one of these big neural networks.

Chris - If it's down to the fact that we've got brain cells talking to brain cells, and they're just big populations of connections, is that not relatively easy to model with a computer? What's the hold-up? Why is it hard to do this?

Geoff - Well, the tricky thing is coming up with the rule about how the strength of a connection should change as a result of the experience the network gets. So for example, very early on, in the 1940s or maybe early 1950s, a psychologist called Hebb had the idea that if two neurons, two brain cells, fire at the same time, then the connection between them will get stronger. If you try and simulate that on a computer, you discover that all the connections get much too strong and the whole thing blows up; you have to have some way of making them weaker too.
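
To make the problem concrete, here is a minimal Python sketch of the pure Hebbian rule Hinton describes, run on made-up activity data rather than anything from his actual simulations; the network size, learning rate, and numbers are all invented for illustration. Because the rule only ever strengthens connections between co-active units, the weights just keep growing, which is the "blow-up" he mentions.

```python
import numpy as np

# A minimal sketch (invented numbers, not Hinton's simulations) of the pure
# Hebbian rule: if two units are active together, strengthen the connection
# between them. Nothing ever makes a connection weaker.
rng = np.random.default_rng(0)

n_units = 5
w = np.zeros((n_units, n_units))          # connection strengths
learning_rate = 0.1

for step in range(1, 51):
    x = rng.random(n_units)               # pretend activities of the units
    w += learning_rate * np.outer(x, x)   # Hebbian update: co-active units wire together
    if step % 10 == 0:
        print(f"after {step} examples, average connection strength = {w.mean():.2f}")
# The average only ever increases: some decay or normalisation term - the
# "way of making them weaker" Hinton alludes to - is missing on purpose.
```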

Chris - I love that line, 'what fires together, wires together.' It's never left me, because I remember reading Hebb's book when I was at University College London. So how did you try and address that then? Was it just a damping problem? You make it so that the nerve cells get bored more easily, as it were, so that it doesn't overheat in the way that the computer would otherwise have it do?

Geoff - Well that's kind of the first thing you think of and you try that and it still doesn't work very well. So the problem is can you get it to work well enough so that it can do complicated things like recognise an object in an image or in the old days recognise something like a handwritten digit. So you take lots of examples of twos and threes and so on and you see if you can make it recognise which is a two and which is a three. And it turns out that's quite tricky. And you try various different learning rules to discover which ones work and then you learn a lot more about what works and what doesn't work.

Chris - What does and doesn't work and why?

Geoff - Okay, I'll tell you something that does work, because that's obviously more interesting. You have a layer of neurons that pretend to be the pixels. So an image consists of a whole bunch of pixels, and the pixels have different brightnesses, and that's what an image is. It's just numbers that say how bright each pixel is. And so that's the input neurons. They're telling you the brightness of pixels, and then you have output neurons. If you're recognising digits, you might have 10 output neurons, and they're telling you which digit it is. And typically the network, at least to begin with, wouldn't be sure. So it hedges its bets and it'd say it's probably a two, it might just be a three, it's certainly not a four. And it would represent that by the output unit for a two being fairly active, the output unit for a three being a little bit active, and the output unit for a four being completely silent.

And now the question is, how do you get those pixels as inputs to cause those activities in the outputs? And here's a way to do it that all the big neural networks now use. So this is the same algorithm that is used to train big chatbots like GPT-4. It's used to train the things that recognise objects in images, and it's called back propagation. And it works like this. You have some layers of neurons between the inputs and the outputs. So the neurons that represent the pixel intensities have connections to the first hidden layer, and then the second hidden layer, and then the third hidden layer, and finally to the outputs. They're called hidden layers because you don't know to begin with what they should be doing. And you start off with just random connections in these networks, so the network obviously doesn't do anything sensible. And when you put in an image of a digit, it will typically hedge its bets across all the possible 10 digits and say they're all more or less equally likely, because it hasn't got a clue what's going on.

And then you ask the following question: how could I change one of the strengths of the connections between a neuron in one layer and a neuron in another layer so that it gets a little bit better at getting the right answer? So suppose you're just trying to tell the difference between twos and threes. To begin with, you give it a two and it says, 'with probability 0.5 it's a two, and with probability 0.5 it's a three.' It's hedging its bets. And you ask, well, how could I change the connection strengths so that it would say 51% two and 49% three? And you can imagine doing that by just tinkering with the connections. You could choose one of the connection strengths in the network and make it a little bit stronger and see if that makes the network work better or work worse. If it makes it work worse, obviously you make that connection a little bit weaker. And that's sort of a bit like evolution. You're taking one of these underlying variables, a connection strength, and you're saying, if I change it a little bit, how can I change it to make things work better, and you save those changes. And you could do that, and it's obvious that in the end that will work, but it would take huge amounts of time. So in the early days we would use networks that had thousands of connections. Now these big chatbots have trillions of connections, and it would just take forever to train it that way. But you can achieve pretty much the same thing by this algorithm called back propagation. So what you do is you put in an image, let's say it's a two.
The weights are initially random, the weights on the connections. So information will flow forward through the network and it'll say 50% is a two and 50% is a three. And now you send a message back through the network and the message you send back is really saying, 'I'd like you to make it more likely to be a two and less likely to be a three. So I'd like you to raise the percentages on two and lower the percentages on three.' And if you send the message back in the right way, you can figure out for all the connections at the same time how to change them a little bit so the answer is a little bit more correct. That's called back propagation. It uses calculus, but it's essentially doing this tinkering with connection strengths that evolution would do by just changing one at a time. But the back propagation algorithm can figure out for all of them at the same time how to change each one a tiny bit to make things work better. And so if you have a trillion connections, that's a trillion times more efficient than just changing one and seeing what happens.
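
As a rough illustration of the loop Hinton is describing, here is a minimal, self-contained Python sketch: random starting weights, a forward pass that hedges its bets, and a backward pass that works out how to nudge every connection at the same time. The "images" are random stand-ins rather than real handwritten digits, and the layer sizes, learning rate, and training set are arbitrary choices for the example.

```python
import numpy as np

# A toy sketch of training by back propagation, not a production recogniser.
rng = np.random.default_rng(1)

n_pixels, n_hidden, n_classes = 64, 32, 2       # pretend 8x8 images; classes: "two" vs "three"
W1 = rng.normal(0, 0.1, (n_pixels, n_hidden))   # input -> hidden weights, start random
W2 = rng.normal(0, 0.1, (n_hidden, n_classes))  # hidden -> output weights, start random

# Fake training set: two blobs of pixel patterns standing in for twos and threes.
X = np.vstack([rng.normal(0.2, 1, (100, n_pixels)),
               rng.normal(-0.2, 1, (100, n_pixels))])
y = np.array([0] * 100 + [1] * 100)             # 0 = "two", 1 = "three"

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

learning_rate = 0.1
for epoch in range(200):
    # Forward pass: pixels -> hidden layer -> class probabilities (e.g. [0.5, 0.5] at first).
    h = np.tanh(X @ W1)
    p = softmax(h @ W2)

    # Backward pass: for every connection at once, work out how a tiny change
    # would make the probability of the correct digit a little bit higher.
    target = np.eye(n_classes)[y]
    d_out = (p - target) / len(X)                # error signal at the output layer
    d_hidden = (d_out @ W2.T) * (1 - h ** 2)     # error signal sent back to the hidden layer

    W2 -= learning_rate * h.T @ d_out            # adjust all hidden -> output weights a little
    W1 -= learning_rate * X.T @ d_hidden         # adjust all input -> hidden weights a little

print("accuracy on the toy data:", (p.argmax(axis=1) == y).mean())
```

Run on real digit images instead of the random stand-ins, the same loop is essentially how the two-versus-three recogniser he describes would be trained.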

Chris - But how does the layer at the bottom know what's going to be changed above it, to make sure that the input it then gets is the right one, so that the change it's just made to itself and its probability ends up being even better, so that you don't end up changing yourself? Then that feeds forward, back up, the network changes something else, but then it becomes less optimal for you, if you get what I'm saying.

Geoff - I get just what you're saying. It's a very good question. And essentially what's happening is, if you take a connection early in the network, it's kind of making an assumption. It's saying, suppose all the other connections stayed the same, how would changing my connection strength make things better? So it's assuming all the other ones stayed the same, and then it's saying, if I change my connection strength, how would it make things better? And they're all doing that. So if you change the connection strengths by a lot, things could actually get worse, because you could choose a way to change each connection strength that, if you did that change alone, would make things better, but when you do all the changes at the same time, it makes things worse. But it turns out if you make the changes very small, that problem goes away. If you make the changes very small, then I figure out how to change one connection strength, and because the changes in all the other connection strengths are very small, it's very unlikely they'll turn, for example, a change that helps into a change that hurts.
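
Here is a tiny numerical illustration of that point, using a made-up two-weight "error" rather than a real network: one big simultaneous step overshoots and makes the error worse, while a small step reliably makes it better, because each weight's update assumes the others barely move.

```python
import numpy as np

# A toy illustration (invented numbers) of why small changes are safe:
# each weight is updated as if all the others stayed the same, so one big
# simultaneous step can overshoot, while a small one cannot do much harm.
A = np.array([[3.0, 2.0],
              [2.0, 3.0]])                 # couples the two "connection strengths"

def error(w):
    return 0.5 * w @ A @ w

w = np.array([1.0, 1.0])
for lr in (0.9, 0.05):                     # a big step versus a small step
    w_new = w - lr * (A @ w)               # change both weights at the same time
    print(f"step size {lr}: error {error(w):.2f} -> {error(w_new):.2f}")
# The big step (0.9) makes the error larger; the small step (0.05) makes it smaller.
```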

Geoff Hinton: Why does AI get things wrong?

Chris - Can those individual layers tell us what they 'think' though? Because one of the problems that researchers mention when I go and talk to them is that they would very much like to know how, when they build these sorts of systems, it's arriving at its conclusion; so-called explainable AI. So when it sees a picture of cancer, having been trained to recognise cancers, it can explain what particular features of the picture it saw singled out those cells as cancerous. And some models do this but others don't. Now, is the way that they do it by those things being able to tell you what they changed in order to make the output that they got?

Geoff - It's not so much to tell you what they changed, but to tell you how they work. So for example, if you take the layer of neurons that receives input from the pixels, let's suppose we're trying to tell the difference between a two and a three. You might discover that one of those neurons in that layer is looking for a row of bright pixels that's horizontal, near the bottom of the image, with a row of dark pixels underneath it and a row of dark pixels above it. And it does that by having big positive connection strengths to the row of pixels that it wants to be bright, and big negative connection strengths to the rows of pixels it wants to be dark. And if you wire it up like that, or rather if it has learned to wire itself up like that, then it would be very good at detecting a horizontal line. That feature might be a very good way to tell the difference between a two and a three, because twos tend to have a horizontal line at the bottom and threes don't. So that's fine for the first hidden layer, the first layer of feature detectors. But once you start getting deeper in the network, it's very, very hard to figure out how it's actually working. And there's a lot of research on this, but in my opinion, it's going to be very, very difficult to ever give a realistic explanation of why one of these deep networks with lots of layers makes the decisions it makes.
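
To show what such a first-layer feature detector amounts to, here is a hand-wired Python sketch of the horizontal-line detector Hinton describes: big positive weights to one row of pixels, big negative weights to the rows just above and below. The 8x8 layout, the weight values, and the "two-like" and "three-like" test images are all invented for illustration, not taken from any real trained network.

```python
import numpy as np

# A hand-wired sketch of one hidden neuron acting as a horizontal-line detector.
size = 8
weights = np.zeros((size, size))
weights[6, :] = +1.0        # reward brightness along a row near the bottom
weights[5, :] = -1.0        # punish brightness in the row just above...
weights[7, :] = -1.0        # ...and in the row just below

def feature_activity(image):
    # Weighted sum of pixel brightnesses: one neuron's total input.
    return float((weights * image).sum())

two_like = np.zeros((size, size))
two_like[6, 1:7] = 1.0      # a bright bar near the bottom, like the base of a "2"

three_like = np.zeros((size, size))
three_like[1:7, 5] = 1.0    # a vertical-ish stroke, like part of a "3"

print("two-like image:  ", feature_activity(two_like))    # strongly positive
print("three-like image:", feature_activity(three_like))  # near zero
```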

Chris - Is the explanation you've given me for how this works pretty generic? So if I took any of these models, they're probably working in a similar sort of way. And if so, when someone says 'I'm working on AI', given that we have that sort of platform, what are they actually working on? How are we trying to change, improve, or develop AI away from that main principle, that core fundamental operating algorithm that you've described for us?

Geoff - On the whole, we're not trying to develop it away from that algorithm. Some people do, but on the whole, what we're trying to do is design architectures that can use that algorithm to work very well. So let me give you an example in natural language understanding. In about 2014, neural networks suddenly became quite good at translating from one language to another. So you'd give them as inputs a string of English words, and you'd want them as outputs to produce a string of French words. In particular, given some string of English words, you'd like them to produce the first French word in the sentence, and then, given the string of English words plus the first French word in the sentence, you'd like them to produce the second French word in the sentence. So they're always trying to predict the next word. And you train them up on lots of pairs of English and French sentences. And to begin with, in 2014, when you're trying to figure out the next word, you'd have influences from all the previous words. And then people discovered a bit later on that, rather than letting all the previous words influence you equally, what you should do is look at previous words that are quite similar to you and let them influence you more. So you're not trying to get rid of the basic algorithm or circumvent it; you're trying to figure out how to supplement it by wiring in certain things, like attention, that make it work better.
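
As a sketch of that attention idea, here is a minimal Python example that weights each previous word by how similar its vector is to the current position, so similar words influence the prediction more. The sentence, the 4-dimensional word vectors, and the random seed are all stand-ins invented for the example, not real learned embeddings.

```python
import numpy as np

# A minimal sketch of dot-product attention over a short word sequence.
rng = np.random.default_rng(2)

words = ["the", "cat", "sat", "on", "the"]
vectors = {w: rng.normal(size=4) for w in sorted(set(words))}   # one toy vector per word type
sequence = np.stack([vectors[w] for w in words])

query = sequence[-1]                                 # the position we are predicting from
scores = sequence @ query                            # similarity of each previous word to it
attention = np.exp(scores) / np.exp(scores).sum()    # softmax: similar words get more weight

context = attention @ sequence                       # weighted mix used to predict the next word
for word, a in zip(words, attention):
    print(f"{word:>4}: attention weight {a:.2f}")
```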

Chris - When we hear that one problem with the large language models that we are seeing manifest very much now is that they can hallucinate, where does that behaviour come from? How do they generate these spurious things that don't exist, but they're said with enormous authority in the outputs from these sorts of engines? Where does that come from?

Geoff - So first, let me make a correction. It ought to be called confabulation, not hallucination. When you do it with language, it's called confabulation, and this was studied a lot by people in the 1930s. And the first thing to realise is that this makes them more like people, not less like people. So if you take a person and you ask them to remember something that happened quite a long time ago, they will, with great confidence, tell you a lot of details that are just wrong. And that's very typical of human memory. That's not exceptional at all. That's how human memory is. And that's why, if you're ever on a jury, you should be very suspicious. When people remember things, they often remember things wrong. So the big chatbots are just like people in that respect. And the reason they're like that, and the reason people are like that, is that you don't actually store things literally. We used to have computer memories where you could take, for example, a string of words, store it in the computer memory, and later go and retrieve that string of words and get back exactly the right string of words. That's not what happens in these big chatbots. What the big chatbots do is they look at strings of words and they try to change the weights in the network so that they can predict the next word. And all of the knowledge they have of all the strings of words they've seen is in the weights on those connections. And when you get them to recall something, what they're really doing is regenerating it, just as people do. So they're always constructing these memories, and there's actually no difference between a real memory and a fake memory, except that one happens to be right. From the point of view of the person constructing it, you don't know which is real and which is fake. You just say what seems plausible to you, and the chatbots do the same. Now, the chatbots are worse than people at confabulating, but they're getting better.
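
Here is a toy sketch of why regenerating, rather than retrieving, can confabulate: a tiny next-word model "trained" on two true sentences can stitch together a fluent sentence that was never in its memory. The two sentences and the seed are invented purely for illustration; a real chatbot stores this kind of knowledge in billions of weights rather than a word-count table, but the regeneration idea is the same.

```python
import random

# A tiny next-word model: count which word follows which, then regenerate.
random.seed(3)

corpus = ["the meeting was on tuesday in london",
          "the meeting was on thursday in paris"]

follows = {}
for sentence in corpus:
    tokens = sentence.split()
    for a, b in zip(tokens, tokens[1:]):
        follows.setdefault(a, []).append(b)

# "Recall" by repeatedly predicting a plausible next word from the counts.
word, output = "the", ["the"]
while word in follows:
    word = random.choice(follows[word])
    output.append(word)
print(" ".join(output))   # can produce e.g. "the meeting was on tuesday in paris" - fluent but false
```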

Geoff Hinton: Why do we trust AI?

Chris - The worry to me is that we regard what people say with a pinch of salt, some more than others. But we tend to have this enormous trust that we place in machines because they behave in a perfect way to our mind. And we are now using machines that behave more like people and have people's flaws in some respects as you've just been outlining to us. So are we going to have to educate people not to think about machines as quite so reliable in future?

Geoff - Yes. What we've produced in these big chatbots is like a new species that's very like us and very unlike a normal computer program. We have to learn not to treat the chatbots like you would have treated an old-fashioned computer program, where you could rely on it. You can't.

Chris - When we were talking earlier, you said you started using computers to understand how the brain worked. But it strikes me that we are now at a position where computers and things like you've been describing are showing us how nature works. It's almost like the loop is closing.

Geoff - Yes. I mean I think we've understood a lot more about language from producing these big chatbots. So in the old days people like Chomsky said that language was innate, it wasn't learned. Well that's become a lot less plausible because these chatbots just start off with random weights and they learn to speak very good English just by looking at strings of English and learning. It told us a lot about how we work. We work very much like them, so we shouldn't be trusting them any more than you trust a person. We should probably trust them less than you trust a person.

Chris - When did you get this name of being the godfather of AI? Because we jumped straight into the hard stuff with our conversation. How did we get to the position we are in today? Because a lot of people suddenly think AI has arrived on the scene here and now, but you got your PhD in it not long after I was born. So what has happened in the last 40 something years and what's been going on in the background and what was your role in being so instrumental in it?

Geoff - Let me give you an analogy, because there's another thing that happened in science that is actually quite similar. So in the 1910s or 1920s, someone who studied climate, called Wegener, decided that the continents had drifted around, and that it wasn't just a coincidence that that bulge on South America fitted nicely into the armpit of Africa. They actually had been together and they came apart. And for about 50 years, geologists said, 'this is nonsense. Continents can't drift around. It's complete rubbish.' And Wegener didn't live long enough to see his theory vindicated. But in, I think, the 1960s or sometime like that, they discovered that in the middle of the Atlantic there's this stuff bubbling up where the continents are moving apart, and it's creating new stuff. And suddenly the geologists switched and said, 'oh, he was right all along.' Now with neural nets, something similar has happened. So back in the early days of neural nets, there were two kinds of theories of how you could get an intelligent system. One was that you could have a big network of neurons with random connections in it, and it could learn the connection strengths from data. And nobody quite knew how to do that. And the other was that it was like logic. You had some internal language, sort of like cleaned-up English, and you had rules for how you manipulated expressions in cleaned-up English, and you could derive new conclusions from premises by applying these rules. So if I say Socrates is a man, and I say all men are mortal, I can infer that Socrates is mortal. That's logic. And most people doing AI, in fact almost everybody after a while, thought that that was a good model of intelligence. It turned out it wasn't. The neural nets were a much better model of intelligence, but that was just wildly implausible to most people. So if you'd asked somebody even 20 years ago, could you take a neural network with random initial connections and just show it lots and lots of data and have it learn to speak really good English, people would have said, 'no, you're completely crazy. That's never going to happen. It has to have innate knowledge and it has to have some kind of built-in logic.' Well, they were just wrong.

Chris - When we hear that people put safeguards around AI, is that where you've got a sort of barrier that it rubs up against? As in, you've got the freedom and the control of its connections in the way that you've been explaining. But when we want to say to it, 'no, I don't want you to invent black Nazis,' which is the problem we had before, when it was coming up with all kinds of generated images and there was an example shown where the images were completely historically inappropriate or implausible, and now that's been fixed, or allegedly has been fixed. How do you then lean on your system so that it doesn't make silly mistakes like that?

Geoff - You first train up a system on a lot of data, and unless you've cleaned the data very carefully, the data contains unfortunate things. People then try and train it to overcome those biases; sometimes they get a bit over-enthusiastic. And one way to do that is you hire a bunch of people who get your chatbot to do things, and then the people tell you when the chatbot does something wrong, or the chatbot maybe makes two different responses and people tell you which is the preferable response. And now you train the chatbot a bit more, so it makes the preferable responses and doesn't make the other responses. That's called human reinforcement learning. Unfortunately, it's often easy to get around that extra training. If you release the weights of the neural network to the public, then people can train it to overcome all that human reinforcement learning and start behaving in a racist way again.
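
Here is a rough sketch of the human-feedback step Hinton describes: people compare two of the chatbot's responses, and the model is nudged so the preferred one becomes more probable. The two "scores" stand in for the model's log-probabilities of each full response, and all the numbers, the learning rate, and the pairwise objective are assumptions made for the example, not the training recipe of any particular chatbot.

```python
import numpy as np

# A toy sketch of training from pairwise human preferences.
score_preferred, score_rejected = 0.2, 0.8   # model currently favours the disliked response
learning_rate = 0.5

for step in range(20):
    # Probability the model ranks the pair the way the human did (logistic of the score gap).
    p_correct = 1 / (1 + np.exp(-(score_preferred - score_rejected)))
    grad = 1 - p_correct                      # gradient of log p_correct w.r.t. the score gap
    score_preferred += learning_rate * grad   # make the approved response more likely
    score_rejected -= learning_rate * grad    # make the other response less likely

p_correct = 1 / (1 + np.exp(-(score_preferred - score_rejected)))
print(f"after feedback, p(model prefers the approved response) = {p_correct:.2f}")
```

The same fine-tuning can, of course, be undone by further training if someone has access to the weights, which is the point Hinton makes about releasing them.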

Chris - But why is the system not bright enough, I'm using that word carefully and in inverted commas, to know that it's getting it wrong? Why does it not then go, 'well, hang on a minute there. There wasn't a particular group represented that I'm showing here, so that must be wrong. I'll correct that.' Why does it not self-correct?

Geoff - So probably initially before Google put a lot of work into getting it to be less biased, it wouldn't have produced black Nazis. But Google put lots of work into making it what it thought was less prejudiced, and as a result it started producing black Nazis. That's unfortunate, but you have to remember when it's producing a picture of a Nazi, it's not actually remembering a particular Nazi. It's just saying what it finds plausible.

Geoff Hinton: His concerns about AI

Chris - You went on to join Google. What led to you saying, 'actually, I'm going to leave'?

Geoff - I was working on how to make analog computers that would use a lot less energy than the digital computers we use for these chatbots. And while I was doing that, I came to the realisation that the digital computers were actually just better than the analog computers we have, like the brain. So for most of my life I was making these models on digital computers to try to understand the brain, and I'd assumed that as you made things more like the brain, they would work better. But there came a point when I realised that actually these things can do something the brain can't do, and that makes them very powerful. And the thing that they can do that the brain can't do is that you can have many identical copies of the same model. So on different computers, you simulate exactly the same neural network, and because it's a digital simulation, you can make it behave in exactly the same way. And now what you do is you make many, many different copies, and one copy you show one bit of the internet, another copy you show another bit of the internet, and each of the copies begins to learn for itself on its bit of the internet and decides how it would like to change its weights so that it gets better at understanding that bit of the internet. But now, once they've all figured out how they'd like to change their weights, you can tell all of them just to change their weights by the average of what all of them want to do. By doing that, you allow each of them to know what all the others learned. So now you could have thousands of different copies of the same model, which could look at thousands of different bits of the internet at the same time, and every copy could benefit from what all the other copies learned. That's much better than what we can do. To share knowledge with me, you have to produce sentences, and I have to figure out how to change my connection strengths so that I would have been likely to produce those sentences. And it's a slow, painful business called education. These things don't need education in that sense. These things can share knowledge incredibly efficiently and with much higher bandwidth than we can.
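
Here is a toy sketch of the sharing trick Hinton describes: identical copies of the same model each work out how they would like to change the shared weights on their own slice of data, and then everyone applies the average of those proposed changes. The "model" is just a two-parameter linear fit on made-up data, and the number of copies, batch size, and learning rate are arbitrary choices for the example.

```python
import numpy as np

# A toy sketch of many identical copies learning on different data and
# pooling their proposed weight changes by averaging.
rng = np.random.default_rng(4)

true_w = np.array([2.0, -1.0])               # the pattern hidden in the data
weights = np.zeros(2)                        # every copy starts from these shared weights
learning_rate = 0.1

for step in range(100):
    proposed_changes = []
    for copy in range(4):                    # four copies, four different slices of data
        X = rng.normal(size=(32, 2))
        y = X @ true_w + rng.normal(0, 0.1, 32)
        error = X @ weights - y
        gradient = X.T @ error / len(X)      # how this copy would like to change the weights
        proposed_changes.append(-learning_rate * gradient)
    weights += np.mean(proposed_changes, axis=0)   # everyone applies the average change

print("shared weights after training:", weights.round(2))   # close to [2.0, -1.0]
```

Because every copy applies the same averaged update, they stay identical, so each one instantly benefits from what all the others saw.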

Chris - So what led to you deciding that it was time to call time at Google, then?

Geoff - People have sort of the wrong story. The media loves to make a nice story, and a nice story would've been: I got very upset about the dangers of AI and that's why I left Google. It wasn't really that. I was 75, it was time to retire. I wasn't as good at doing research as I had been, and I wanted to take things easy and watch a lot of Netflix, but I thought I'd take the opportunity just to warn about the dangers of AI. And so I talked to a New York Times journalist and warned about the dangers of AI, and then all hell broke loose. I was very surprised at how big a reaction there was.

Chris - Were you really?

Geoff - Yes. I didn't expect there to be this enormous reaction. And I think what happened is, you know how when the huge wave comes, there's a whole bunch of surfers out there who'd like to catch the wave, and one particular surfer just happens to be paddling at just the right time, so he catches the wave. But if you ask why it was that surfer, it was just luck. And I think lots of people have warned about the dangers of AI, but I happened to warn at just the time it became something of intense interest. And I happened to have a good reputation from all the research I'd done. And so I was kind of the straw that broke the camel's back, but there were a whole lot of other straws there.

Chris - Indeed. I think it also has a lot to do with who is saying it, doesn't it? Because if you've got somebody who's a journalist saying, 'well, I've heard a few people say this,' it carries very different expectations than if someone like yourself, who's devoted their career to it and been very successful and been a pioneer, says there are concerns; then people are going to take that a lot more seriously. But what do you think those main concerns are?

Geoff - Okay, so there's a whole bunch of different concerns. And what I went public with was what's called the existential threat. And I went public with that because many people were saying, this is just silly science fiction, it's never going to happen, it's stupid, it's science fiction. And that's the threat that these things will get more intelligent than us and take over. And I wanted to point out that these things are very like us, and once they get smarter than us, we don't know what's going to happen, but we should think hard about whether they might take over and we should do what we can to prevent that happening. Now, there's all sorts of other risks that are more immediate. The most immediate risk is what's going to happen in elections this year, because we've got all this generative AI now that can make up very good fake videos and fake voices and fake images, and it could really corrupt democracy. There's another thing that's happened already, which is that the big companies like Facebook and YouTube use techniques for getting you to click on things. And the main technique they use is they'll show you something that's even more extreme than what you just watched. And so that's caused a big polarisation of society, and there's no sort of concept of agreed truth anymore. And each group keeps getting reinforced, because things like Facebook and YouTube show them things they like to see. So everybody loves to get indignant, it turns out. And if you were to tell me there's this video of Trump doing something really bad, I would of course click on it so I could see what it was. That's really terrible for society. It means you get polarised into these different groups that don't talk to each other, and I don't think that's going to end well.

Chris - It's sort of amplified the echo chamber, hasn't it? The one thing that I thought you might lead with when answering that question, though, was the thing that immediately sprang to my mind as a big concern, which is working in science, which is an evidence-based discipline where we take the weight of evidence to decide whether we're on the right or the wrong path. If you have systems which are confabulating, they are potentially polluting the knowledge space with confabulation, which has the effect potentially of leading us totally down the wrong path, because it adds veracity and authenticity to things that are actually completely wrong. And it could end up with us mis-learning lots of things, couldn't it?

Geoff - Yes, it could. Or of course scientists already do that themselves. The scientists who just make things up. And in particular when it comes to sort of theory. So Chomsky for example, managed to convince lots and lots of linguists that language is not learned. It's pretty obvious that language is learned, but he managed to convince them it wasn't. So it's not just chatbots that can mislead people.

Chris - What do you think that the biggest benefits are going to be though, and in what sorts of time scales?

Geoff - Well, I think there's going to be huge benefits. And because of these huge benefits, we are not going to stop developing these things further. If there were only these risks, maybe we could agree to just stop doing it. But there's going to be huge benefits in areas like medicine where everybody's going to be able to get much better diagnoses.

Chris - And any really speculative, adventurous, kind of out-there thoughts about what it might enable us to do in the future?

Geoff - More or less anything <laugh>. I mean there's a view of what progress of humanity is that says that it consists of removing limitations. So when we were hunter gatherers, we had the limitation, you have to find food every few days. As soon as you start farming, you can store food and you don't have to find it every few days, you're living in the same place so you can store food. So that got rid of a huge constraint. Then there's a constraint that you can't travel very far because you have to walk or ride a horse. Well transport like trains and bicycles and cars and planes eventually got rid of that constraint. Then there's a constraint that we've got limited strength. And then the industrial revolution came, and then we got machines that were much stronger and human strength ceased to be something of much value. Now what's happening with these big chatbots is our routine intellectual abilities are being surpassed by them or soon will be. So all of those jobs that just need a reasonably intelligent person to do them are in danger of being done by these chatbots, and that's kind of terrifying.
