Dr Bob Carlyon, MRC Cognition and Brain Sciences Unit
Part of the show How We Hear, Echolocation and Giant Whoopee Cushions
Chris - We all take our ears for granted, I think. How do they actually work?
Bob - Basically, sound is a vibration in the air, and it's picked up by the floppy bit on the side of your head, which is called the pinna. Sound is then transmitted to the inner ear, inside of which there's a membrane that is thin and stiff at one end and wide and wobbly at the other. The thin, stiff end vibrates most in response to high frequency sounds, and the wide, wobbly end vibrates most in response to low frequency sounds.
Chris - And that's the bit called the cochlea, isn't it?
Bob - Yes, that's a membrane called the basilar membrane inside the cochlea. There is an array of receptor cells along its length, which pick up the vibrations and transmit them along nerve fibres to the brain.
Chris - So at different points along the cochlea, you're literally vibrating some bits more than others and this is creating electrical signals in the nerves that the brain can understand.
Bob - That's absolutely right, yes.
Chris - If I stand in a noisy room, I've got sound bombarding me from all angles and in both ears. But you then say something from the other side of the room and I can focus in on just the sound of your voice. So how the hell do I do that?
Bob - That's right. It's particularly impressive that we can do it, because my voice and other people's voices are physically quite similar. So the brain uses two or three different tricks. One of these is that you've obviously got two ears: my voice might be louder at one ear than the other, and we can use those differences. But even if you're listening to something on a mono radio and there are lots of people arguing on the same programme, you can still separate out the sounds. What the brain uses is that different frequency components of the same person's voice will tend to start and stop at the same time, and they will also share a common pitch, and the brain can use a pattern recognition method to group those things together.
Chris - Can your ears literally tune into certain sounds then?
Bob - The ears are pretty good at tuning into individual frequencies, but the problem is that my voice contains lots of frequencies. It's got the high frequency hiss of the fricatives and the low frequency parts produced by vibration in my voice box and nasal cavity, and what the brain has to do is group all those little bits together and ignore other frequencies that might belong to someone else. That's the cunning bit.
Chris - So once it's been converted into electrical nerve signals, where do those signals go to get interpreted?
Bob - There's lots of processing in different neural pathways all the way up the auditory system. In vision, the retina does a bit of the work and the signal goes fairly straight up to the visual cortex without much in between. But in the auditory system there are lots of nuclei in the brain stem: one called the cochlear nucleus, and another called the inferior colliculus. By the time the signal gets up to the auditory cortex on the surface of your brain, quite a lot of processing has already taken place.
Chris - I was reading a wonderful piece of research the other day about earworms; songs that go round and round in your head and you can't get rid of them. What's going on there?
Bob - It's an interesting question. There's a certain type of musical hallucination that people hear. We can all force ourselves to imagine sounds, but some people just get songs stuck in their heads and can't get rid of them. Brain imaging work has identified the areas of cortex involved: not the primary auditory areas, but secondary ones. One was dubbed the 'football's coming home' part of the brain, because the person in the scanner showed activation in this area when that particular song came on.
Chris - And are we any closer to getting rid of it, because it's damn aggravating!
Bob - Not for songs that keep going round in your head. I think you're best off going down to the pub and acting like a fruit fly! There's another irritating sound that you get, which is called tinnitus. It's the ringing bells, whistles and hums that you get, and it can be very, very debilitating.
Chris - Is that the same phenomenon?
Bob - No, not really. I think tinnitus is basically the brain interpreting activity that is going on in the periphery of the auditory system. Sometimes tinnitus occurs as a result of, or following, some hearing loss or damage in that area.
Chris - Sometimes if someone loses part of a limb, they feel as though they can still feel it, and they can also experience phantom pain. One suggestion I did hear from somebody is that long term exposure to lots of loud sounds damages the parts of the cochlea that would turn certain frequencies into signals sent to the brain. So, as a result of losing those parts of the cochlea, tinnitus is the auditory system's equivalent of phantom pain.
Bob - I think that one way of looking at tinnitus is the brain responding to spontaneous firing of the auditory nerve fibres, which we all get, as some kind of threat signal. There's some interesting research showing that in a third of cases, tinnitus actually follows a stressful event in the person's life. So in some cases there isn't really anything in particular that has gone wrong with their ear. It may just be that their brain has started interpreting this signal as a threat, which makes it sound more threatening to them, and so they become more stressed.
Chris - Talking about stress signals, can you explain to me why it is that when somebody puts their fingers down a blackboard, it's so unpleasant?
Bob - There was an interesting paper on that. I think if you asked most scientists which frequency components of that sound would be the most irritating, they'd say it was the high frequency components. What these scientists did was to filter the sound into different frequency regions and then present these transformed versions of the sound to people, to find out which bits were spine chilling. The paper was called 'Psychophysics of a chilling sound', and the surprising finding was that it's the low frequency components of the sound that are responsible for that horrible shiver up your spine.
Chris - But why do we get it? One theory that I read was that when an animal is subjected to horrendous events such as a lion biting into the back of an antelope that's trying to run away, it makes a very high pitched sound. This carries very far, and lots of other animals hear it. This alerts you and galvanises you to run away; it's a danger signal. Is that perhaps what's behind this?
Bob - Well it might be, but then there are lots of other dangerous sounds that you might hear. I could play you the sound of a lion in your ear and it wouldn't make your spine tingle in that way, or your toes curl.
Chris - No, but the sound of the animal in distress would. The sound of an animal squealing is similar to the fingers down a blackboard.
Bob - Yes, but what we need to know is why some of those distressing sounds make you feel like that, while others just make you think, 'poor animal'. I think there are some low frequency modulations in the sound, and they may be activating certain brain structures, but I don't think anybody's really looked at it in any great detail.
Chris - Let's look at when hearing goes wrong, because people obviously do go deaf and hearing becomes less acute. Is that because the cochlea is losing nerve fibres or cells that do that conversion process?
Bob - It's usually the receptor cells that die off. There are two types of cell that sit on the basilar membrane. One type acts purely as receptors, and the other acts as a mini amplifier, if you like: they kick energy back into the sound and make that bit of the basilar membrane a bit more picky, or selective, about the frequencies it likes. Often those are the first to go.
Chris - So when we lose them and we want to restore hearing using this cochlear implant technology, how does that work? What does it do?
Bob - I'd just like to say that the standard treatment for people with hearing loss is still a conventional hearing aid; cochlear implants are really for people in whom the receptor cells have completely died off or are doing rather poorly. But a cochlear implant looks a bit like a normal hearing aid. It's worn behind the ear and there's a microphone attached to it. There's a little radio frequency transmitter worn on the surface of the head just above the ear, which transmits energy to a little receiver implanted inside the person's head. This then sends electrical impulses to electrodes inserted inside the inner ear. Basically, high frequency sounds go to electrodes located on the bit of the membrane that would normally encode high frequency sounds in normal hearing.
Chris - How good is it?
Bob - It's pretty good if you're listening in quiet surroundings. With speech face-to-face in quiet, or on the telephone, many patients do extremely well. The problems occur first of all when there's more than one person talking at a time; the other difficult situation is listening to music or singing.
Chris - You've given us a sample. Let's have a listen to what a piece of music would sound like if you're listening to it with a cochlea implant. [sound] That doesn't sound like they'd enjoy that concert very much.
Bob - Not very much. And what's more, they can't hear what the person is singing either.
Chris - Shall I actually play the normal one now?
Bob - Yes [sound].
Chris - A bit of Ella Fitzgerald there. Now if I play the first version, it's amazing because you can almost hear what you should be hearing. [sound] But why is it so bad, the rendition? Why are they not experiencing the wonderful sound that most of us are?
Bob - I think the reason is that you can hear the words that are being said, and really, when cochlear implants were developed, that was the main aim, because people needed to be able to understand what others were saying to them. But they weren't really designed with pitch perception in mind, so the way pitch is encoded in cochlear implants is quite different from the way it's encoded in normal hearing.
Chris - Can it be improved?
Bob - It can possibly be improved, yes. There are certainly small incremental improvements being made all the time. One of the things we're looking at is whether there's any sea change that can be made; in other words, we're looking for a more radical way in which sound could be encoded.