Struggling to hear in noisy places?
Do you find that speech is much harder to follow in noisy places? Does background din make what other people are saying trickier to understand? The good news is that you're not alone: lots of people struggle, and it's often because, as we age, we lose hearing acuity. By the time we're 40, we can have as few as half the number of fibres in our auditory nerves that we had when we were 4. But traditional hearing tests are notoriously poor at picking this up. Luckily, here's someone who can help! Speaking with Chris Smith, Harvard's Daniel Polley...
Daniel - My name is Daniel Polley. I am the director of the Lauer Tinnitus Research Center and I think we have discovered a new type of test for a very common but hidden hearing complaint. The problem which many of us, especially if we've reached the ripe age of say 35 or older, confront, is going to a crowded location like a bar, restaurant, or a pub in the UK. We are listening to our conversational partner across the table, and they're speaking loudly enough, but somehow our hearing keeps getting pulled into the conversation from the people next to us. You're having a hard time separating the voice of the person talking to you from the voices of the people you're trying to have fade into the background.
Chris - And what is the issue that's causing that to happen?
Daniel - Well, that's really the key question because many people that complain of this will go to their hearing specialist. The hearing specialist will do their standard set of tests and that type of test is useful for identifying a hearing loss. But it doesn't really help in identifying this hidden hearing complaint of being unable to follow conversations in noisy backgrounds.
Chris - And what fraction of people, when they experience this symptom, will go and have those tests and then be sent away saying, nope, your hearing's absolutely fine?
Daniel - Right. So there's the dark matter in the universe: all the people who have this complaint but don't go to a hearing specialist, because maybe they figure it's not worth the bother. But we have looked into the database at our clinical care center and we find that about 10% of people come in with a hearing complaint but are told that their hearing is normal.
Chris - So what can we do about it then? In terms of what have you managed to discover here, that means we're in a position to better inform people about the status of their hearing?
Daniel - So one way was to ask how well the first stations in the brain that process sound are capturing a signal that we nerdy scientists call frequency modulation. Think of, for example, the sound of an ambulance siren - wee ooh wee ooh. The more sensitive you are to that little warble, the better able you are to understand speech in background noise. And for us that was a huge hint, because we could try to zoom in and capture the way that the earliest stages of the brain were encoding these little warbles.
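[Editor's note: the "warble" Daniel describes is a frequency-modulated tone. As an illustration only - the parameter values below are invented for demonstration and are not those used in the research - such a tone can be synthesised like this:]

```python
import math

# Illustrative sketch of a frequency-modulated ("warbling") tone, like a
# gentle version of an ambulance siren. All parameter values are
# hypothetical choices for demonstration.
SAMPLE_RATE = 16000   # samples per second
CARRIER_HZ = 1000.0   # centre frequency of the tone
WARBLE_HZ = 2.0       # how fast the pitch wobbles up and down
DEPTH_HZ = 25.0       # how far the pitch deviates: a subtle warble

def fm_tone(duration_s=1.0):
    """Return samples of a sinusoid whose instantaneous frequency
    warbles around CARRIER_HZ by +/- DEPTH_HZ, WARBLE_HZ times a second."""
    samples = []
    phase = 0.0
    for n in range(int(SAMPLE_RATE * duration_s)):
        t = n / SAMPLE_RATE
        # instantaneous frequency: carrier plus a slow sinusoidal deviation
        inst_freq = CARRIER_HZ + DEPTH_HZ * math.sin(2 * math.pi * WARBLE_HZ * t)
        # advance the phase by this sample's instantaneous frequency
        phase += 2 * math.pi * inst_freq / SAMPLE_RATE
        samples.append(math.sin(phase))
    return samples

tone = fm_tone(0.5)  # half a second of warbling tone
```

The deeper or faster the warble, the easier it is to detect; the test described here probes how faithfully the listener's early auditory system tracks these subtle frequency changes.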
Chris - How can you make that appraisal when you can't necessarily drill a hole in someone's head and make those measurements? What sort of proxy markers are there for that sort of encoding that are sensitive enough that you can use them?
Daniel - The ear is tiny. It is intricate. It is locked into the densest bone in the body. It is really trying to hide from us. It's really inaccessible. So what we can do is put an electrode in the ear canal, so that we can measure electrical signals from the ear canal generated by the early stages of processing. Then we can play those types of sounds, those warbles, and we can analyse the fidelity with which the electrical signals lock onto those subtle changes in frequency.
Chris - And is there any way of then standardising this? So you make those measurements: how do you then relate that to whether the person really does or doesn't have good hearing, or struggles to hear in noisy environments? Do you just ask them, or do you play them other sounds to mask what you're presenting to them? How do you then work out how useful that is as a guide?
Daniel - That's one measurement, and then we need to relate it to how much they struggle with a challenge that resembles listening to your conversational partner across the table at the pub. So for that, we test their hearing with a listening task that simulates a crowded restaurant. They will have a conversational partner, we call him Fred, and Fred will read off a series of numbers; meanwhile, two people next to Fred, spatially separated a little bit, will read off numbers at the same time. The listener's task is to repeat back the numbers that Fred said, and we'll see if they get confused by the numbers that the people next to Fred are saying - for instance, if they latch onto the wrong stream and repeat the wrong set of numbers.
Chris - And how do you tell how hard they're working at the same time? Because that was part and parcel of what we were saying, that actually the brain has to do a lot of heavy lifting here to disentangle this extraneous noise that's coming in. So how do you get at that?
Daniel - I'm glad you asked, because we added another type of measure that's very different, and it relates to what many of us know as effort. If we're listening for half an hour, an hour, to that person in the crowded restaurant, we start to get a bit worn down. It actually requires a lot of effort to marshal all of these cognitive resources like attention, prediction, memory - maybe you're trying to look at the mouth to integrate the visual signals with the auditory signals. And we cannot directly measure those cognitive resources, because there is no electrode for that, but what we can do is measure the pupil. That may seem strange, but when we're using more effort to solve a problem, our pupils get a little bit wider. So while they're doing that task of listening to Fred while trying to suppress the sound of Fred's neighbours, we're looking at changes in the diameter of their pupil. The people who are having to burn more cognitive fuel, to use more effort to understand, their pupils are getting wider than those of people who are having an easier time of it.
Chris - And then you put all of this information together? The sensitivity to the ambulance-style frequency modulation, the ability to understand and correctly remember what Fred has said, and then pupil size: you integrate all of that, and this is the predictor?
Daniel - Exactly right. Knowing these three pieces of information, we could predict with 80% accuracy how well you do in this type of listening challenge. What we don't know is whether, on an individual basis, we can bring you in and say: this is the issue with you - yes or no. So in that sense, further development is needed to refine the sensitivity and the selectivity of the test.
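[Editor's note: as a purely illustrative sketch - this is not the study's actual model, and the weights below are invented placeholders - combining the three measures into one predictive score might look like this:]

```python
# Hypothetical sketch of combining the three measures described in the
# interview: FM-encoding fidelity from the ear-canal electrode, the
# digits-in-noise task score, and pupil dilation (listening effort).
# The weights are illustrative placeholders, not values from the research.
def predict_listening_ability(fm_fidelity, task_score, pupil_dilation,
                              weights=(0.5, 0.3, 0.2)):
    """Weighted combination of three normalised measures, each in [0, 1].
    Higher fm_fidelity and task_score suggest better real-world listening;
    larger pupil_dilation signals more effort, so it counts against."""
    w_fm, w_task, w_pupil = weights
    return w_fm * fm_fidelity + w_task * task_score - w_pupil * pupil_dilation

score = predict_listening_ability(0.8, 0.9, 0.4)
```

In practice, a model like this would be fitted to data and evaluated for sensitivity and selectivity before any individual diagnosis could be made, which is the further development Daniel describes.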