How we hear
Our ears are extremely well adapted to help us hear and understand sounds, but sometimes they go wrong. Richard Turner, from the University of Cambridge, works on developing better hearing aids, so he's an expert on the ear and how it works...
Richard - Okay, so I'm going to tell you about the three stages that sound passes through to go from soundwaves to brain waves. The first step happens here with the outer ear. This collects sounds, shapes them a little bit and passes them on to the second stage.
Chris - This is the bit you can see sticking out from the side of people's heads?
Richard - Indeed, yeah. These are your pinnae, to use the technical term, on the outside of your head. They collect sounds and pass them on to what's called the middle ear, which contains three of the smallest bones in the human body. These bones are roughly half a centimetre in size. They act to focus the sounds onto a very small area called the oval window, and they make the vibrations of that window much greater than they would have been if the bones weren't there.
Chris - When people talk about the eardrum, where's that? What does that do?
Richard - So, the eardrum, or the tympanic membrane, is where these bones collect the sound from; they then carry it from the eardrum to the oval window at the other end.
Chris - The pinna therefore collects the sound and funnels it into the bit you can stick your finger in and wiggle. At the end of that is the eardrum; when the soundwaves, the vibrations, hit it, the bones pick it up.
Richard - Then the bones pick it up, amplify it a little bit along the way and then wiggle around the oval window at the end of that process.
Chris - "The oval window" sounds like one of those old children's telly programmes from my youth. If we go through the oval window, what's in there?
Richard - This is where the third stage is, where the really interesting stuff happens. In there is a fluid-filled space, and the oval window is wiggling backwards and forwards in time with the sound coming in, which moves the fluid around. Sitting in the fluid is a very clever membrane called the basilar membrane. At one end, the membrane is very, very thick and stiff; at the other end, it's very thin and pliant. If a high frequency sound comes in (say, one of Rob's tuning forks with a high pitched tone), that will wiggle around the thick, stiff end of the basilar membrane, and it won't wiggle around the pliant end very much at all. But if we take a tuning fork with a low frequency, then that will excite the pliant, wiggly end and won't move the stiff end at all. So, the basilar membrane separates out the frequency content of sounds.
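What the basilar membrane does mechanically, separating a mixed sound into its component frequencies, can be sketched in software with a Fourier transform. This is only an analogy, and the sample rate and tone frequencies below are invented for illustration:

```python
import numpy as np

# Mix two pure tones, then recover their frequencies with a Fourier
# transform -- a rough software analogue of the frequency analysis the
# basilar membrane performs mechanically.
fs = 1000                                  # samples per second (arbitrary)
t = np.arange(fs) / fs                     # one second of time points
sound = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 200 * t)

spectrum = np.abs(np.fft.rfft(sound))      # energy at each frequency
freqs = np.fft.rfftfreq(len(sound), d=1 / fs)

# The two strongest frequency components are the two tones we mixed in.
peaks = sorted(float(f) for f in freqs[np.argsort(spectrum)[-2:]])
print(peaks)  # -> [50.0, 200.0]
```

The stiff end of the membrane plays the role of the high-frequency bins here, and the pliant end the low-frequency bins.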
Chris - When I'm listening to you speaking, there's a whole range of different notes and frequencies mixed in and I experience those as your voice. But you're saying that my inner ear, my cochlea, this basilar membrane, because it has different parts of it that vibrate more or less with different sound frequencies, different parts will therefore respond to different sounds. What happens when it responds and vibrates? What does it do next?
Richard - So now, we've got mechanical motion. How do we turn that into brain waves now? Sitting next to the basilar membrane are a line of very special cells called hair cells. They're called hair cells because they have little protrusions coming out of them.
Chris - Don't tell me, I bet they're called "hairs!"
Richard - They are called hairs, indeed! As the basilar membrane moves, it displaces these hairs that come out of these cells. That's what transduces this mechanical vibration of the basilar membrane into electrical signals inside the brain. So, we get signals out in response to what the different frequency components are in a sound at any particular time.
Chris - Effectively, we're going to see different parts of this basilar membrane shaking more or less in response to different sounds and the nerve cells, the hair cells are picking that up and turning it into electrical signals.
Richard - Yup, that's right. You can think of it as a little bit like a piano in reverse. When we play a piano, we move the strings and that produces soundwaves. In the ear, when a sound of a particular frequency comes in, it excites a particular place on the basilar membrane, as if causing one particular string, one particular key, to move. So, it's sort of like a piano in reverse, carrying out the analysis of the incoming sound.
Chris - When someone goes deaf, what's gone wrong with the system?
Richard - There are a number of things that can go wrong and cause hearing loss. One common one, which affects lots of people, is that their hair cells get damaged. For instance, if you go to lots of rock concerts and listen to very, very loud music, that can cause what's called noise-induced hearing loss. You can have diseases which affect the hair cells, and perhaps the most common form of hearing loss is that, as you get older, these hair cells tend to perform less well. In particular, that happens at the high frequency end, so as people get older, they find it harder and harder to hear high frequency sounds.
Chris - Do you quite literally get deafened by the sound of your own voice?
Richard - There are mechanisms that stop you getting deafened by your own voice. So, this middle ear...
Chris - I can just see Nicole, our opera singing colleague is nodding her head, because you have to sing pretty loud. I mean...
Nicole - My partner says he's going deaf just by living with me in proximity.
Chris - But sorry, Richard. You were going to say, why don't I deafen myself quite literally when I'm shouting at a rugby match or something?
Richard - When you speak very loudly, you engage muscles which connect to the three bones in the middle ear and stop them moving around so much. The transmission from the eardrum to the oval window is then less strong, and the vibrations caused in the cochlea are less strong too.
Chris - When someone does become hard of hearing and they need some help, what sorts of help can we give people?
Richard - At the moment, current hearing aids work in the following way. If you lose your hair cells, they respond less vigorously to the incoming sound, so a simple way of compensating for that is to make the sounds much louder, and you do that in a frequency-specific way. If you can't hear high frequencies, we boost the high frequency volume of the incoming sounds; that's essentially what a current hearing aid does.
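A toy sketch of that frequency-specific boosting, assuming a simple filter that amplifies everything above a cutoff. Real hearing aids use multi-band compression fitted to the wearer's audiogram, processed in short frames; the signals and parameters here are invented:

```python
import numpy as np

# Boost only the frequencies above a cutoff, leaving the rest untouched --
# a crude stand-in for the frequency-specific gain of a hearing aid.
def boost_high_frequencies(signal, fs, cutoff_hz, gain):
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    spectrum[freqs >= cutoff_hz] *= gain     # amplify the high band only
    return np.fft.irfft(spectrum, n=len(signal))

fs = 8000
t = np.arange(fs) / fs
low = np.sin(2 * np.pi * 250 * t)            # a tone the wearer hears fine
high = 0.2 * np.sin(2 * np.pi * 3000 * t)    # a quiet high tone they struggle with
aided = boost_high_frequencies(low + high, fs, cutoff_hz=1000, gain=5.0)
```

After processing, the 3000 Hz component is five times stronger while the 250 Hz component is unchanged.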
Chris - When you see someone with that little thing behind their ear and a little pipe going into their ear, what's that actually doing then? Is that pipe carrying the amplified sound just into their ear canal?
Richard - That's right and passing it onto the system which then processes the sound in the normal way.
Chris - If someone finds that's not terribly useful, is there anything even better that you can offer or is there anything else we can do?
Richard - Yeah, that's something that we're working on. One of the things which people with hearing loss find very difficult and challenging is noisy environments, where there's lots of background, often environmental, noise going on. We've become very interested in studying the properties of those environmental noises and developing methods to automatically remove them from the speech and music sounds that you're more interested in.
Chris - So, if we were down the pub having a conversation, I, with normal hearing, find it relatively easy to zone out all the other noise going on around us and focus on what you're saying to me. A person's hearing aid wouldn't have that ability, so it would just amplify everything, and they're going to get quite literally deafened by this onslaught of noise, because it's indiscriminate. That's what you're saying...
Richard - Yeah, that's right. The amplification applies to everything: not just the signals that you're interested in, like the speech or the music, but all the background noise too. The question is, can you build intelligent devices that only amplify the signals of interest and maybe suppress the background noise a bit, so it's not so audible?
Chris - Can you do that?
Richard - No. This is technology definitely for the future, but we've got some interesting demonstrations, a sort of proof of concept, of what we can do in this area. We're going to listen to a short clip of some environmental sounds. These are the sorts of sounds which people with hearing impairment find very distracting when they're played alongside speech: they find the speech very difficult to hear.
Richard - Okay, so in the clip, I start by a campfire, and then you can hear my footsteps as I walk along. I walk past a little stream, which you can hear in the background here. Then the wind gets up and it starts to rain. So, I unzip my tent, and here you can hear the raindrops just coming in, and I get into my tent just in time.
Chris - I think I went on that holiday as well.
Richard - What's perhaps surprising about the sounds I just played to you is that each one is entirely synthetic. They were produced by taking a statistical model and training it on a short clip of sound. Because the model has learned the statistics of those noises, it can produce synthetic versions of arbitrary length which look very different from the original but sound just the same.
Chris - Those are not real sounds. A computer has churned those out based on having learned what those should sound like.
Richard - That's right. It's learned things like the statistics of the falling raindrops in the rain example: what the raindrops sound like individually and how quickly they arrive. It's then able to synthesise new rain sounds automatically that a listener can't tell apart from the real thing.
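A very stripped-down illustration of the idea in Python: "learn" one statistic from a recorded clip (the average gap between raindrops) and synthesise a new sequence of arbitrary length sharing that statistic. Real sound-texture models capture far richer statistics; the recorded gap times below are invented data:

```python
import random

# "Train" on a short clip: the only statistic we learn here is the
# average gap between raindrops.
recorded_gaps = [0.12, 0.08, 0.15, 0.10, 0.09, 0.14]   # seconds between drops
mean_gap = sum(recorded_gaps) / len(recorded_gaps)     # the learned statistic

# Synthesise new drop times of arbitrary length by drawing gaps from an
# exponential distribution with the learned mean (a Poisson-process
# assumption -- far simpler than a real texture model).
random.seed(0)
drop_times, t = [], 0.0
for _ in range(20):
    t += random.expovariate(1 / mean_gap)
    drop_times.append(t)
```

The synthetic drop times are new, not copied from the recording, but on average they arrive at the learned rate.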
Chris - Can I guess, then, that where you're going with this is: if you can predict what a sound sounds like, you can digitally subtract it, taking that signal away from the sound coming into, say, a hearing aid and leaving behind what the sound would be like without that noise.
Richard - Absolutely, that's what we're trying to do. The analogy I often use here is a spam filter. An email spam filter separates the emails that should be arriving for you from the spam. The way it does it is by calculating the statistics of spam and the statistics of your emails, based on word frequencies. From those differing statistics, it can figure out what's spam and what's a real, bona fide email. We're trying to do the same thing with audio signals, where the spam is environmental noise and the signal that you want is speech or music.
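One classical way to cash out this "spam filter for audio" idea is spectral subtraction: estimate the noise's magnitude spectrum, then subtract it from the noisy signal's spectrum. The sketch below is an idealised version in which we get to measure the noise directly; the statistical models described above are far more sophisticated, and all the signals here are invented stand-ins:

```python
import numpy as np

# Spectral subtraction: remove an estimate of the noise's magnitude
# spectrum from the noisy signal, keeping the noisy signal's phase.
def spectral_subtract(noisy, noise_profile):
    noisy_spec = np.fft.rfft(noisy)
    clean_mag = np.maximum(np.abs(noisy_spec) - noise_profile, 0.0)
    return np.fft.irfft(clean_mag * np.exp(1j * np.angle(noisy_spec)),
                        n=len(noisy))

rng = np.random.default_rng(0)
fs = 4000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 300 * t)        # stand-in for the wanted signal
noise = 0.3 * rng.standard_normal(fs)       # stand-in for background noise

noise_profile = np.abs(np.fft.rfft(noise))  # the "learned" noise statistics
cleaned = spectral_subtract(speech + noise, noise_profile)
```

In this idealised setting the residual noise after subtraction is far smaller than the original noise; in practice the noise spectrum has to be estimated statistically, which is where the learned models come in.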
Chris - If someone needs a more radical intervention, what's the difference between what you've been describing and say, a cochlear implant? What's one of those and how does that work?
Richard - So, a cochlear implant is surgically implanted into patients who are almost completely deaf. So, they often have no functioning hair cells at all. And so, we need to cut the hair cells out of the loop. Amplifying the sound just isn't going to work because there's nothing there to pick up the sound anyway. And so, what you do is you implant an array of electrodes inside the cochlea that then interface directly with your auditory nerve fibres and bypass the hair cells completely.
Chris - If I therefore send little electrical signals down those electrodes, they will directly activate the nervous system, fooling it into thinking it's heard a vibration, a soundwave at that frequency, and then those signals go into the brain, and the brain thinks there must be a sound there. But it's actually an electrical signal.
Richard - Yeah, that's right. You have a microphone outside on the side of your head, which then communicates by a radio link to the implant inside the patient's head.
Chris - Who's got some questions for Richard?
Ben - I'm Ben and I'm from Barrington. How long are these hairs?
Richard - That's a good question. I don't know precisely, but they're extremely small. We're talking about micrometres, perhaps even smaller than that.
Chris - Micrometres, being a millionth of a metre.
Richard - Exactly, yeah.
Chris - Tiny then.
Ailish - Hi, my name is Ailish and I'm from Cambridge. I don't know if this is your area or not, but is there any stem cell research into growing those hair cells?
Richard - Yeah, there is. Again, it's totally not my area at all, but I think there has been recent success in actually starting hair cell regrowth. So before, I think people really struggled and couldn't find a treatment that would cause hair cells to regrow, but I think there's been a recent breakthrough in that regard.
Norman - I'm Norman from Barrington as well. Thinking about bats, how does a bat's ear differ from a human ear because they can hear very different sorts of frequencies, can't they?
Richard - They can. I know almost nothing about bats, but one thing I do know is that a bat would deafen itself if it didn't protect itself against its own echolocating sound, because it outputs an extremely high amplitude sound which, if it had its ears "turned on" at that point, would deafen it.
Chris - Bats are officially louder than The Who, the loudest rock band in history. Bats squeak at more than 120 to 130 dB; The Who, I think, got 110 at one of their concerts.
Richard - So, the bat has a mechanism, very much like the middle ear mechanism I talked about, where it turns down the gain on its auditory system to avoid deafening itself with its own echolocation calls.
Chris - Ginny...
Ginny - We've got a question that's come in on Twitter. In response to your thing about the bones tightening so that you don't deafen yourself, Charlotte Hill says, "Does this explain why sometimes when people talk loudly, they don't realise how loud they are?"
Richard - It could well do, yeah. There are a number of things going on there, because you also hear yourself through bone conduction, which is what makes it hard for singers sometimes to figure out what pitch they're singing at and exactly how loud they're singing.
Ginny - That's why the first few times you hear yourself back on the radio, you think, "Oh my goodness! Do I really sound like that?"
Richard - Indeed. You sound very, very different to how you expect to sound. And that's because you're listening via your sound conducted by the bones in your head rather than through the pathway I've been talking about.
Anthony - I'm Anthony from Cambridge. Why are some people tone deaf?
Richard - I don't think that's anything to do with the early auditory processing pathway I've talked about. There is a lot we don't know about how perception of sounds and the neurobiology underlying that perception works. I think tone deafness is something which happens much further up the pathway. And so, the answer is, I'm afraid, that we just don't know.
Chris - Nicole...
Nicole - It's true, what Richard is saying about the neurobiology: we don't in fact know. There are some theories that it's possibly genetic and that it develops, as you say, further down the line. There's a slight difference, though, between tone deafness and someone who just sings off pitch. It could also be a matter of technique, in that you don't actually know how to sing. Even though you can hear the pitch in your mind and think you're getting it correctly, you might not be supporting the voice, you might not be opening the back of the throat, and often you might not be dropping the tongue. There's a whole load of technique around the embouchure and the support that you need to make a sound. So if someone has a nice voice and can hear the pitches, but sings slightly under the pitch, we call that flat: your tone is flat. That could be a case of your ear being right but your technique being off. That can be fixed; tone deafness can't. I have had a tone deaf student and it broke my brain a little bit.
Chris - As well as your hearing. Thank you. Dave, you've got an oven shelf in your hand. What are you pair planning now?
Ginny - So, this actually links in really nicely with what we've just been talking about: bone conduction. What are we going to do, Dave?
Dave - So first of all, we need a volunteer from the audience.
Ginny - What's your name?
Boris - I'm Boris and I'm from Hungary.
Ginny - So, what are we going to ask Boris to do?
Dave - I've basically got an oven shelf here hanging by a piece of string.
Ginny - It's a bit of a weird thing to be carrying around with you.
Dave - These things happen, you know.
Ginny - And it's one of the kind of oven shelves that's like an open grill type thing, that you then put your tray of potatoes on top of in the oven.
Dave - That's right. So, it's just a metal grill and I can hit that with a piece of metal.
Ginny - It makes a vaguely pleasant noise. What do you think?
Boris - Yeah.
Ginny - It's not particularly exciting though.
Dave - It's pretty dull. It's quite tinny, quite high-pitched. Now, what I want you to do Boris is wrap the piece of string it's hanging on around a finger and then stick that finger onto the kind of fleshy flap of skin which goes over your ear.
Ginny - Okay, so you're going to have an oven shelf on a piece of string dangling from your ear. We promise you, this is for science. It's not just to make you look silly.
Dave - So, if you lean forwards so the oven shelf isn't touching anything and now, I'm going to hit the oven shelf.
Ginny - Did it sound the same or did it sound different?
Boris - No. It sounds a lot lower.
Ginny - A lot lower pitched.
Boris - Yup!
Ginny - Anything else that was different?
Boris - It was a little louder.
Ginny - It's got louder and lower pitch. Now, it sounded exactly the same to me, but Boris has got his finger in his ear with a piece of string and it sounds different to him.
Dave - It's a bit rubbish that only Boris can hear this. So, we're going to try and get it so everyone can hear this now.
Ginny - Okay, so now we're going to replace Boris's ear with a microphone, which hopefully means that everyone here and everyone at home will be able to get the same effect that Boris was getting. Dave is now tying the oven shelf onto the microphone with another piece of string. He's holding the microphone by a piece of string, so the microphone is dangling, and below that the oven shelf is dangling. Shall we hear it one more time, what it sounds like normally?
(sound) Okay, and now, if we let it go...
Dave - So now, you'll be able to hear the sound which is coming up the string as well as the sound coming through the air. (lower sound)
Ginny - Everyone hear that?
Audience - Yeah. (applause)
Ginny - What's going on there, Dave? Why does it sound so different when the string is attached to the microphone or your ear?
Dave - First of all, if I just hit this normally, and if you feel it now.
Ginny - It's vibrating. It kind of tickles my fingers.
Dave - So, it's always vibrating at both the high pitches which you can hear normally and also, those low pitches. But the thing is with low pitches, they can't efficiently get into the air. It's a bit like, if you try and make waves with a finger in water, if you move your finger very, very fast, the water hasn't got time to get out of the way and you make waves. If you move your finger really slowly, the water's got plenty of time to get around it and so, you don't make very big waves.
Ginny - You just end up sort of moving your finger through the water, like the water doesn't come with you.
Dave - That's right. The oven shelf is vibrating very, very fast, that can make waves in the air, soundwaves, you can hear it...
Ginny - And those fast ones are the high pitches, the kind of tinny sound.
Dave - That's right. But the low pitches can't get into the air, so they can't get into your ears that way. But if you've got a stiff piece of string, when one end of it moves, the other end moves too. That sound goes straight up the string, in through the bones, and straight to your ear. So, you can hear the low pitches as well as the high pitches.
Ginny - And that's why it sounds really different when it's connected either to your ear or I guess, the same happens with the microphone. The vibrations travel up the string to the microphone and you can pick those up directly.
Dave - That's right, and that's another reason why, if you've ever seen old western films, you see people putting their ears to the track. Low frequency sounds travel down a track very effectively; they can't get out of the track very well, so they travel a lot further. If you put your ear on the rail, the sound goes straight in through the bones into your ears, you'll hear it really well, and you'll know when that train is coming so you can ambush it...