Brain waves sync up when two people talk
Interview with Sam Nastase, Princeton University
You’ve probably heard people talk about “being on the same wavelength” as each other, and we tend to use phrases like, “I get what you mean,” when we’re explaining things to one another. And, as an intriguing experiment has shown this week, it turns out that, when we really do “get” what the other person is saying, we’re syncing up our brain activity with theirs! Sam Nastase, who is at Princeton, had the opportunity to work with two epilepsy patients who had received brain monitoring implants ahead of their surgical treatment. He put the two of them together so they could converse, while their patterns of brain activity and what they actually said to one another were recorded in real time. And by using AI to analyse the words and contexts of what they said, and relating this to the brain recordings, it was possible to show matching patterns of activity appearing in the brain of the speaker and then the listener…
Sam - We were lucky enough to have two epilepsy patients in the room together at the same time. Both of the patients have electrode grids placed directly on the surface of their brains that are basically recording electrical activity as the neurons in their brains are firing while they're having a face-to-face conversation with each other. So we get neural activity measurements in both brains at the same time. We're recording all of the audio in the room, so we know exactly what they're saying and when, and we can transcribe that. And that's kind of where we start in terms of modelling.
Chris - So what, to my naive mind, you are doing is asking: when one person outputs some data, we can look at what the pattern of brain activity is. Then we can look at the recipient of that data and we can see if their brain basically recapitulates the pattern of activity that came out of the first person's brain, arguing they are definitely on the same wavelength as each other. You've recreated the same neurological pattern in the recipient as the emitter.
Sam - That's exactly right. And in the past we've tried to do something like this directly, where we look at how correlated my brain activity is with your brain activity. That's interesting, right? Because it can really tell us how closely two people are coupled, how strongly correlated they are. It can tell you maybe where in the brain there's brain-to-brain coupling. But it doesn't really tell you what the content of that coupling is. It doesn't tell you what information in my brain is driving brain activity in your head. And so that's where we try to bring in explicit models of linguistic content, right? Because a conversation could be driven by all sorts of things, like gestures and facial expressions, right? We don't really know what's driving the brain-to-brain coupling. So we're going to bring in models, we're going to test different models, to see what information is really shared between the two brains.
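To make that "direct" comparison concrete, here is a minimal, hypothetical Python sketch of correlating one speaker electrode with one listener electrode across a range of time lags. The array sizes and the synthetic data are illustrative assumptions, not the study's actual recordings or pipeline.

```python
# Hedged sketch: direct brain-to-brain correlation between one speaker
# electrode and one listener electrode at a range of lags (toy data only).
import numpy as np

rng = np.random.default_rng(0)
n_samples = 5000                               # time points, e.g. downsampled ECoG
speaker = rng.standard_normal(n_samples)       # one speaker electrode (placeholder)
listener = rng.standard_normal(n_samples)      # one listener electrode (placeholder)

def lagged_correlation(x, y, max_lag):
    """Pearson correlation of x with y shifted by each lag (in samples)."""
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag < 0:
            a, b = x[:lag], y[-lag:]
        elif lag > 0:
            a, b = x[lag:], y[:-lag]
        else:
            a, b = x, y
        out[lag] = np.corrcoef(a, b)[0, 1]
    return out

coupling = lagged_correlation(speaker, listener, max_lag=50)
best_lag = max(coupling, key=coupling.get)
print(f"peak coupling r={coupling[best_lag]:.3f} at lag {best_lag} samples")
```

As Sam notes, a map of correlations like this can say where and how strongly two brains are coupled, but not what content is driving the coupling, which is why the language models come in next.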
Chris - And how do you analyse the speech, then, to work out what the content is, in order to then ask how that relates to the brain activity? Because that's the nub of what you're saying to me: it's what the content is, which elements of that content, and in what context they're being presented between the two, that is producing this kind of resonance in the other one's head.
Sam - Yeah. So we start with the transcript and we're going to feed these words into different kinds of language models, right? And we had hypothesised that this new generation of large language models would best fit the human brain activity, because these models really capture the meaning of words in context.
Chris - So, unlike where we might have a problem: if I said to you, "you were cold," I could mean you are thermally challenged. On the other hand, I could mean you're extremely unforthcoming in this interview <laugh>. And unless I know the context and you know the context, then actually a model wouldn't know. So that's what you're trying to say. You're trying to extract a lot more information about the words, which will make your interpretation, I suppose, a lot more accurate.
Sam - Yeah, that's right. Models like GPT can really sculpt the meaning of each word as you're going through a conversation and represent that word in a much more specific, context-sensitive way.
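As an illustration of what "context-sensitive" means in practice, the sketch below uses the openly available GPT-2 model via the Hugging Face transformers library to pull out the embedding of the word "cold" in two different sentences. The sentences and the choice of GPT-2 are assumptions for demonstration, not necessarily the model or preprocessing used in the study.

```python
# Hedged illustration: the same word gets different vectors in different contexts.
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

def word_embedding(sentence, word):
    """Return the contextual embedding of `word` within `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]          # (tokens, 768)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    idx = tokens.index("\u0120" + word)   # GPT-2 marks space-prefixed words with 'Ġ'
    return hidden[idx]

# "cold" as a temperature versus "cold" as a manner (illustrative sentences).
thermal = word_embedding("The room was freezing and I felt cold", "cold")
social = word_embedding("You were rather cold with me in that interview", "cold")
cosine = torch.nn.functional.cosine_similarity(thermal, social, dim=0)
print(f"cosine similarity between the two 'cold' vectors: {cosine:.3f}")
```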
Chris - So when you do this, what do you actually see, then, once you start subjecting the brain activity patterns to this sort of scrutiny, with this much tighter and more accurate interpretation of the context of the words being used by the speaker and the receiver? What emerges?
Sam - Yeah, so we take these rich, context-sensitive representations from a model like GPT and we run them up against the brain data. We can see this really beautiful dynamic, word by word, as I'm speaking like this: just prior to each word, the model can capture linguistic content starting to emerge in my brain activity, before the articulation of the word. And then, as soon as I say the word, we can see that same linguistic content re-emerging in your brain as you hear the word and as you process the meaning of it.
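A rough sketch of the kind of word-level encoding analysis being described: for each time lag around a word's onset, fit a regularised linear map from the word's embedding to neural activity and measure how well it generalises. Everything below (toy sizes, the ridge penalty, the lag grid, the random placeholder data) stands in for the real recordings.

```python
# Hedged sketch of a word-level lagged encoding analysis (toy data only).
# Assumed shapes: one embedding per word, one electrode, activity sampled
# on a grid of lags around each word's onset.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_words, emb_dim = 400, 50
embeddings = rng.standard_normal((n_words, emb_dim))   # GPT-style word features

lags_ms = np.arange(-500, 501, 100)                    # -500 ms ... +500 ms
# neural[i, j]: activity on one electrode at lag lags_ms[j] around word i's onset
neural = rng.standard_normal((n_words, len(lags_ms)))

# Encoding performance per lag: how well the embeddings predict activity there.
for j, lag in enumerate(lags_ms):
    r2 = cross_val_score(Ridge(alpha=10.0), embeddings, neural[:, j],
                         cv=5, scoring="r2").mean()
    print(f"lag {lag:+5d} ms  cross-validated R^2 = {r2:.3f}")

# The pattern described in the interview: in the speaker, encoding peaks at
# negative lags (before the word is spoken); in the listener, it peaks at
# positive lags (after the word has been heard).
```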
Chris - If you mix up people from different cultures, backgrounds, linguistic backgrounds, would you get a different result here? Because there's that old saying, isn't there, "lost in translation"? I presume the people here were from the same sort of cultural and linguistic background, so this sort of synergy occurs. But would you expect a difference if I put someone from the other side of the world, who didn't naturally speak English because it was their second language, in the room?
Sam - Yeah, absolutely. If you have, for example, a non-native English speaker who's not particularly proficient in English, right? I think that using this kind of modelling approach, you would be able to see that their brain is not as easily keeping up with your brain or they might have to do more neural work to keep up with your brain in a way that might be much more sort of easy for a native speaker.
Chris - I presume the two people who were chatting got on okay. What would happen if they disagreed? What about if you started a row between them <laugh>? Would you start to see them reacting quite differently? Because obviously one's brain is syncing up with the other because they like each other, they're agreeing with each other. What about if they fell out?
Sam - I don't think we have particularly good examples of that in this relatively limited set of conversations, but you can certainly imagine scenarios where we are not aligning, right? Like our brain activity is not aligning. Or maybe I'm trying to kind of push you in a particular direction, or maybe I'm even trying to deceive you or something like this, right? There are a lot of interesting scenarios where alignment is not really the only goal. We're not really able to look at that with the data that we have, but I think it's a really interesting question.