Professor Ian Cross, University of Cambridge
New research published in the journal Royal Society Open Science suggests that we hear music differently when we are simultaneously processing sentences that are grammatically ‘knotty’ or difficult. Graihagh Jackson met with Cambridge University’s Ian Cross to ‘pluck’ away at the findings…
Ian - The study looks at whether or not language and music have the same relationship to our experience. That is, do we use the same brain systems in experiencing language and experiencing music? Specifically, what this study does is look at the ways in which we experience one thing following another. Now in language, we can see this very clearly - it’s called syntax. It’s a set of rules that are implicit; we’re not aware of using them, but we’re aware when they’re breached. Music seems to have something similar going on.
Graihagh - I can imagine how in sentences there’s a rule, there’s a structure, there’s a grammar but in music how does that play out?
Ian - Well here’s a simple example… [music] It’s possible but you wouldn’t probably go there… And you certainly wouldn’t go… [music]
Graihagh - Yes, I can see what you mean; it just sounds wrong.
Ian - Yes, so we have intuitive expectations of what should follow what in music just as we do in language.
Graihagh - So it’s theorised that these pattern-breaking, rule-breaking things are processed in the same part of the brain, and the idea of this study was to work out what happened?
Ian - Yes. The idea is to present people with sentences, which have “knotty” bits, where they suddenly veer off in a weird direction like “the horse raced past the barn fell.”
Graihagh - Yes. I was sort of looking up thinking, yes, yes, okay, I’m with you, but it’s not. I mean, there are easier ways of saying that.
Ian - There are easier ways of saying that but we can say it that way.
Graihagh - And they did this and played music at the same time to see what the participants’ response was? Whether they understood or misunderstood the sentence...
Ian - Yes. They found that “knotty” bits in the music affected perception of “knotty” bits in the speech, and vice versa. What that suggests is that a prior hypothesis, the shared syntactic integration resource hypothesis - snappy title - was in fact correct. And this hypothesis suggests that in language you have representations of words, word meanings if you like, stored in particular locations of the brain. In music, you’ve got representations of pattern stored in other locations of the brain. The bit that’s shared is the bit where legitimate order is sorted out. They suggest that there are common cognitive and neural resources implicated in integrating the temporal structure in both language and in music.
Graihagh - So, there’s almost this overlap and the brain is trying to process two bits of information in the same place, and that’s why you hear music slightly differently or misunderstand this “knotty” sentence?
Ian - That is precisely the case: the same resources are being used to work out whether or not a particular sequence is legitimate.