How the brain processes language

Could knowing more than one tongue leave you tied?
14 May 2024

Interview with 

Mirjana Bozic, University of Cambridge


So, what do we know about what is happening in the human brain when someone is learning a language? I went to meet Mirjana Bozic from the University of Cambridge’s department of psychology. Mirjana is a cognitive neuroscientist who studies language…

Mirjana - In order for us to start understanding what's happening in the brain, we need to narrow it down to what type of information within the language signal we want to look into. This is something that linguists have defined for us. For instance, when we are speaking now, we are processing the sounds of different words, or the individual elements within those words: that's phonology. When we are processing the meaning of individual words and sentences, that's known as semantics. And if we are processing how different words are put together in sequences, to know whether something is a grammatical sequence in the language or not, that's known as syntax, or grammar. It's been shown that the processing of all of these different types of information can be differentiated in the brain.

Chris - When one thinks about the process of us communicating, it starts with me making a sound that goes to your ears. At what stage do your ears extract just the speech from the sounds that are coming in - because there are other sounds in this room, there's us wriggling on the settee? How is that done?

Mirjana - This is done hierarchically, really. What has been shown is that the brain is going to extract, first, the very low level information from the signal. These would be frequencies, for instance. It's been shown that this happens in early auditory processing areas of the brain. These are known as the primary auditory cortices, and they are parts of the brain that sit in the temporal lobe - the part of the brain that's close to your temples and close to your ears, on both sides of the brain. Then it goes progressively from there to extract more linguistic, more language specific, information, in areas that are a little bit further away from the primary auditory cortices. So there is this hierarchical processing of different levels of information, from low level perceptual information to the meaning of individual words.

Chris - So it pulls out what is speech first. Say there's five other people in this room and they're all having a conversation. How do I focus just on information from you if you're the person I'm talking to?

Mirjana - Well, language obviously works together, so to speak, with other cognitive processes to make it flow, so that we can use language as efficiently and effortlessly as we do. To do that, we use something called selective attention. This is the ability to focus on the properties of the signal that are relevant and to ignore the signal that's irrelevant, and that is going to be more or less difficult depending on what you're focusing on and what the interference is. It's easy enough to differentiate between speech that you want to focus on and music, for instance, because they differ on all sorts of low level features. Whereas if somebody is speaking in a different language, that's going to be slightly more difficult, and if somebody is speaking in the same language that we are having a conversation in, that's going to be more difficult still. So there are these hierarchically different levels of processing that are more or less difficult to suppress, depending on their features, their properties.

Chris - When I'm listening to you talking to me, I think my brain is anticipating what you are probably going to say next, and when that is the case, it's happy; when that's not the case, it's sort of piquing my attention. Is that how I'm staying trained on and paying attention to what you are saying? Is that how I'm decoding it on the fly?

Mirjana - We are certainly predicting quite a lot of what's likely to happen next. That's based on your general knowledge of language, the knowledge of the conversation that's happening. There is always this interaction between the top down processing and the bottom up processing, meaning that there is going to be information that's extracted from the signal itself - so low level properties of the signal - and then the top down knowledge of what's likely to happen given the language that we are speaking in. There is going to be integration between those two. Ultimately, that's going to give us this effortless way that we use language as easily as we do.

Chris - And what about if I come along and learn another language? How does my brain cope with that? Because I can speak French, so how does my brain know I'm in English mode, and I understand that and I think in that and I put the words together in the right order. But when I want to, I can flick a switch mentally and then I'm in French mode.

Mirjana - That is a question that has been a focus of a lot of scientific attention in the last, I would say, two or three decades. What has been shown pretty consistently is that having two languages, this bilingual context, leads to competition between them. They're co-activated constantly, regardless of whether one of those languages is irrelevant for the particular communicative context. So, for instance, we are using English now and our other languages are not really relevant for this particular context, but they're going to be, at some level, competing with the signal that we actually intend to use. That's likely to lead to some processes of selection and competition between the two. This has been one of the very interesting findings in the bilingualism literature: there is this consistent suppression of the irrelevant signal taking place in order to use the relevant one.
