Conversations over Cocktails: How We Converse in a Crowd

14 April 2012

Picking out a single voice in a noisy room can be a challenge.  Our ears are assaulted by a wide range of noises that compete for our attention, yet somehow we are still able to enjoy a coherent conversation.  Now, research suggests that the way we move our heads may have a significant impact on how we interpret the sounds that reach our ears.

How we distinguish different streams of audio has been studied for a long time.  A very basic example is telling apart two tones played in a pattern.  If you listen to a two-tone pattern that simply alternates ABA- (where "-" indicates silence), at first it's almost impossible to hear it as anything other than a single sequence.  But as we hear it looped (ABA-ABA-ABA-ABA-...), the brain adapts and separates it into two streams, so we can distinguish A-A-A-A-... from -B---B---B-...  However, should you then rapidly move your head, this adaptation briefly goes away.
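
To make the stimulus concrete, here is a minimal sketch of how such a looped ABA- sequence could be synthesized.  This is not the authors' actual stimulus; the tone frequencies, slot durations, and use of NumPy/SciPy are illustrative assumptions.

```python
import numpy as np
from scipy.io import wavfile

SR = 44100        # sample rate (Hz)
SLOT = 0.1        # length of each tone or silent slot (s); illustrative

def slot(freq_hz):
    """One 100 ms slot: a pure tone, or silence when freq_hz is 0."""
    t = np.arange(int(SR * SLOT)) / SR
    return 0.5 * np.sin(2 * np.pi * freq_hz * t)

A, B = 500.0, 700.0   # tone frequencies (Hz); the A/B gap is an assumption

# One ABA- cycle.  Looped, listeners tend to stop hearing a single
# galloping sequence and instead split it into a fast A-A-A-A stream
# and a slower -B---B--- stream.
cycle = np.concatenate([slot(A), slot(B), slot(A), slot(0.0)])
loop = np.tile(cycle, 10)

wavfile.write("aba_loop.wav", SR, (loop * 32767).astype(np.int16))
```

Playing the resulting file back for a few seconds is usually enough to hear the "gallop" break apart into the two streams.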

But why should moving your head, even when the auditory environment doesn't change, cause you to lose this adaptation?  To untangle the potential consequences of head movement, Hirohito Kondo and colleagues at the NTT Communication Science Laboratories in Japan set up a robotic mimicry study, which they describe in the journal PNAS this week.  A special robot, known as the Telehead, was set up to exactly mimic the head movements of a series of volunteers.  Microphones in the robot's ear canals broadcast sound to the volunteers' headphones, so they heard exactly what the robot was hearing.

Audio: an alternating ABA sound sequence

Audio: a looped ABA-ABA sound sequence

A series of sounds was then played to the robot while the volunteers were asked to move their heads.  In a second series of experiments, the volunteers remained still while the robot repeated their previous movements.  Crucially, this presented the same acoustic cues to the volunteers, but without any accompanying movement or shift of attention.  This allowed the researchers to see whether the change in sound or the movement of the head itself was responsible for the change in perception.

They found that a sudden change in sound, whether from an intentional head movement or from the shifting auditory cues of the robotic head moving on its own, caused a "resetting" of the brain's interpretation of the sound.  The authors argue that the brain must rely on both head position and the relative differences in sound between our two ears in order to interpret the auditory environment, and pick out the gossip over cocktails.
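
As a rough illustration of why head position matters (this back-of-the-envelope example is ours, not the paper's), the classic Woodworth spherical-head approximation shows how much a head turn changes the interaural time difference, one of the between-ear cues the brain must reinterpret:

```python
import math

HEAD_RADIUS = 0.0875     # m, a typical adult head radius (assumption)
SPEED_OF_SOUND = 343.0   # m/s in air at room temperature

def itd_seconds(azimuth_deg):
    """Woodworth ITD for a distant source at 0-90 degrees azimuth."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A talker fixed at 30 degrees to one side: turning the head 20 degrees
# toward them moves their apparent azimuth to 10 degrees, and the ITD
# at the ears shrinks accordingly.
print(f"head still:  {itd_seconds(30) * 1e6:.0f} microseconds")  # ~261
print(f"head turned: {itd_seconds(10) * 1e6:.0f} microseconds")  # ~89
```

A modest head turn thus shifts the between-ear timing by well over a hundred microseconds, a large change by auditory standards, which is consistent with the idea that the brain must track head position to keep its interpretation of the scene stable.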
