Music Technology And Recreating Sounds

The Naked Scientists spoke to Jez Wells, Department of Electronics, University of York
21 May 2006

Kat - Now we're going to chat to Jez Wells from the Electronics Department at the University of York. Jez and his colleagues are doing some really fascinating work into how we can recreate sounds that don't exist anymore. Tell us about it.

Jez - There are two ways you can think about sound. You can either think about sound as it arrives at the ear or you can think about sound as it is being made by an instrument. We look at both aspects; we look at physical modelling and spectral modelling of sound. Spectral modelling is all about sound and the nature of sound as it enters the ear. Physical modelling is all about the nature of sound as it is created by an instrument. When we are trying to create a physical model of a guitar string we are interested in how long it's going to take a sound wave to travel down the string to the bridge of the guitar, and then what's going to happen to that sound wave when it gets to the bridge. Some of it's going to be reflected back down the string and some of it's going to be passed on to the body of the guitar. So when we're building physical models of instruments, what we're interested in doing is working out how all these physical components of an instrument actually fit together and interact to create sound. Once we've created a physical model, what we then have to do is excite it. That usually means plucking it if it's a stringed instrument or blowing it if it's a bottle or a flute. Then what we're interested in doing is hearing it, so we need to put a microphone somewhere around our physical model. We might put it near one part to get one type of sound, or near another part to get another type of sound. This is what recording engineers do all the time: when you're miking up a guitar or a harp or something like that, you're always interested in getting the best noise. What we can then do is take a physical model that is one-dimensional, where waves are moving up and down the string, and look at it in two dimensions by creating a mesh, if you like.
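
[Editor's note: the plucked-string model Jez describes, where a wave travels along a string, loses a little energy at the bridge and reflects back, can be sketched in its very simplest form as a Karplus-Strong-style delay line. This is a toy illustration, not the York group's actual model; the sample rate, pitch and loss figure are arbitrary choices.]

```python
import numpy as np

SR = 44100            # sample rate in Hz (assumed)
FREQ = 220.0          # fundamental frequency of the modelled string (assumed)
BRIDGE_LOSS = 0.995   # fraction of the wave reflected back at the bridge

rng = np.random.default_rng(0)
N = int(SR / FREQ)              # delay-line length: one trip along the string
line = rng.uniform(-1, 1, N)    # "pluck": fill the string with random displacement

out = np.empty(SR)              # one second of output
for i in range(SR):
    out[i] = line[0]
    # At the bridge, part of the wave is reflected back down the string.
    # Averaging two neighbouring samples acts as a gentle low-pass filter,
    # standing in for the energy passed on to the guitar body.
    reflected = BRIDGE_LOSS * 0.5 * (line[0] + line[1])
    line = np.append(line[1:], reflected)
```

Because energy leaks away at the bridge on every round trip, the tone decays naturally, much like a real plucked string.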

Kat - Are these all computer models?

Jez - Yes, computer-based models. So they're all imagined. Another important aspect of physical models is interacting with them, because having these things trapped inside a laptop isn't much fun. Interfaces and how we analyse human gestures and make them into musical gestures is another important part of physical models. But yes, the models themselves actually reside in the imagination of the modeller and inside the memory and processor of the computer.

Kat - So you've brought along some examples of your work today.

Jez - Yes, this is work that's been done by one of my colleagues, Dr Damien Murphy, who's been working on three-dimensional physical models. Now the great thing about three-dimensional physical models is that we can actually start to imagine spaces, as opposed to one-dimensional things such as a piece of string or two-dimensional things such as the skin of a drum. The great thing about being able to model spaces is that if you're an architect, you don't have to wait until the final bit of carpet is rolled out in your auditorium to find out whether it sounds any good or not. Rather than having to build a small physical model, which used to happen, we can now put the architectural design into a computer and the computer imagines the air sitting inside that space. Normally we're interested in an impulsive sound such as the clapping of hands, and quite often we start off with a gun sound. We're interested in how the building responds. What happens when a gun is fired is that sound waves begin to move through the air particles. This is one reason why we can't have sound in a vacuum: it has to have a medium to travel through. But then it reaches certain objects: people, walls, that kind of thing. Some of it will be absorbed and some of it will be reflected back, and that's dependent on the nature of the sound, the frequency of the sound and so on. What we've got with this three-dimensional modelling is that we can ask: even though this building doesn't exist, what would it sound like? This is quite handy for buildings that have been destroyed by fire or by warfare, and for buildings that haven't yet been made.
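
[Editor's note: the idea of the computer "imagining the air" in a space can be sketched with a finite-difference mesh, here in two dimensions for brevity (the three-dimensional case extends the same update rule). This is an illustrative toy, not Dr Murphy's actual model; the grid size, step count and rigid-wall boundary are assumptions.]

```python
import numpy as np

NX = NY = 40      # a 40 x 40 grid of air "cells" standing in for the room
STEPS = 200       # number of time steps to simulate
C = 0.5           # Courant number; must stay below 1/sqrt(2) for stability

p_prev = np.zeros((NX, NY))         # pressure field at the previous time step
p = np.zeros((NX, NY))              # pressure field at the current time step
p[NX // 2, NY // 2] = 1.0           # the impulsive "gunshot" in mid-room

for _ in range(STEPS):
    # Discrete Laplacian: how much each cell differs from its neighbours
    lap = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
           np.roll(p, 1, 1) + np.roll(p, -1, 1) - 4 * p)
    p_next = 2 * p - p_prev + C**2 * lap
    # Perfectly rigid walls: clamp the boundary to zero. A real room model
    # would mix reflection with frequency-dependent absorption here.
    p_next[0, :] = p_next[-1, :] = p_next[:, 0] = p_next[:, -1] = 0.0
    p_prev, p = p, p_next
```

Recording the pressure at one listener cell over time would give the room's impulse response at that position.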

Kat - We've got an example here, which I gather is the sound of Coventry Cathedral before it got blown up.

Jez - Yes, that's the old Coventry Cathedral. There's a fully functioning new Coventry Cathedral but the old one was destroyed during the Second World War. So what you'll hear is the impulse, actually, that's a choirboy. That was recorded in an anechoic chamber, which means a room without any echo at all. I hope what we'll hear next is what sounds like a gun being fired in a large space. Now what we're going to do is by a process known as convolution, combine the sound of the chorister with the sound of Coventry Cathedral. This is what this chorister might have sounded like had they been able to sing in Coventry Cathedral before it was destroyed.
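
[Editor's note: the convolution step Jez describes, combining the dry chorister recording with the cathedral's impulse response, looks like the following in outline. The sine tone and decaying noise here are synthetic stand-ins for the real recordings, which of course aren't reproducible on the page.]

```python
import numpy as np

sr = 8000
t = np.arange(sr) / sr

# Stand-in for the anechoic chorister: a plain 440 Hz tone
dry = np.sin(2 * np.pi * 440 * t)

# Stand-in for the cathedral's impulse response: exponentially
# decaying noise, a crude imitation of a large reverberant space
rng = np.random.default_rng(0)
ir = rng.standard_normal(sr) * np.exp(-3 * t)

# Convolution: every sample of the dry sound launches its own copy of
# the room's response, and all the copies sum into the "wet" result
wet = np.convolve(dry, ir)
```

The output is longer than the input because the room's reverberant tail carries on after the singing stops.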

Kat - Lovely. Not the sound of a chorister being shot, which I'm worried it might have been. We've got some other sounds here. You're working on how to combine the noises of instruments to make entirely new sounds.

Jez - Yes, having just talked about physical modelling I'll talk a bit about spectral modelling, where we're interested in the make-up of sounds. Hugh was playing different harmonics from a pipe. What happens when you combine those harmonics is the sound of an instrument. The reason that we can tell the difference between a trumpet and a violin playing the same note for the same duration at the same loudness is that the relationship between the harmonics, the relative loudness of the harmonics, is different. So with spectral modelling what we're able to do is break sounds down into these fundamental ingredients. It's a little bit like taking a cup of coffee and being able to work out how many sugars it's got in it, how much milk it's got in it and what the coffee beans were. Then you can begin to say, well, what would happen if I added ten sugars, or what would happen if I took all the milk out and added cream instead? So one of the things I'm interested in is creating sound hybrids. One of the things I've produced is something called a floboe. A floboe is what would happen if a flute and an oboe were to get together and have a baby instrument. The idea is not to combine the two sounds so that it sounds like two instruments being played, but so that they're somehow fused together. In physical terms, what we're doing is taking the excitation part of the oboe, which is the reed that vibrates, and we are imposing it on the resonant structure of the flute. We've also got some of the breathiness of the flute thrown in for good measure as well.
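
[Editor's note: the "fundamental ingredients" idea, breaking a note into harmonics, adjusting their relative loudnesses, and summing them back, is additive synthesis in its simplest form. The sketch below blends two invented harmonic recipes; the amplitude numbers are illustrative, not measured flute or oboe spectra, and a real floboe involves far more than this.]

```python
import numpy as np

SR, F0 = 44100, 261.6                 # sample rate and pitch (middle C), assumed
t = np.arange(SR) / SR                # one second of time

# Toy harmonic "recipes": relative loudness of the first four harmonics
flute_amps = np.array([1.0, 0.20, 0.05, 0.02])   # fundamental-heavy
oboe_amps = np.array([0.50, 0.90, 0.70, 0.40])   # strong upper harmonics

def additive(amps):
    """Resynthesize a tone by summing its harmonics at the given loudnesses."""
    return sum(a * np.sin(2 * np.pi * F0 * (k + 1) * t)
               for k, a in enumerate(amps))

# A hybrid "floboe" recipe: blend the two sets of harmonic loudnesses
# (geometric mean, so each harmonic sits between its two parents)
hybrid_amps = np.sqrt(flute_amps * oboe_amps)
floboe = additive(hybrid_amps)
```

Changing a recipe and resynthesizing is the audio equivalent of taking the milk out of the coffee and adding cream instead.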

Kat - Right, let's hear this. So we've got the flute. And now we've got an oboe. Which in my opinion sounds like a duck dying. Hopefully something more beautiful is a floboe. You can really hear the two different characteristics.

Jez - Well that's the idea. A lot of people expect music technology to be about amazing swooshes and amazing synthetic zap sounds. But music technology has hopefully now started moving into an area where those kinds of special effects can still be had, but we can also move into creating sounds which are much more acoustically plausible, that sound like they could have been made by a real physical object. It's slightly unflattering hearing it in an anechoic chamber like that; that's great for us when analysing sound, but it's not so great for general recordings. For those you'd want a nice big auditorium such as a cathedral.

Kat - And the final example you've brought for us is something that really doesn't exist anymore, and that's the sound of a castrato. Now how did you make this?

Jez - Well this was for a programme that is going to be on BBC 4 some time this summer. It's a series about the 18th century and they wanted to make a programme about castrati.

Kat - Tell us briefly what castrati are.

Jez - Castrati were boys who had exceptionally good voices and were selected for castration. It was more of a snip than a hack, although I'm sure that wasn't much consolation. It's not quite as brutal an operation as some people have thought it might have been. The reason it was done was to preserve their vocal folds, or vocal cords. It would prevent their vocal cords from thickening and elongating during adolescence, but the body itself would continue to grow and the mind would continue to grow, so all those years of training would pay off. So what you end up with is a man with a large vocal tract and a huge power supply, a big pair of lungs, in control of a pre-adolescent's vocal folds.

Kat - So let's have a listen to this. We have first the sample of a tenor.

Jez - Yes this is a sample of a tenor singing a Handel aria that was written for castrati (sound).

Kat - Now we have the sound of a treble.

Jez - Now the treble is singing this an octave higher, and the idea is that we really want the resonant structure, the lungs and the vocal tract, of the guy we just heard, but we want the vocal cords, the excitation, of the guy we're about to hear (sound).
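
[Editor's note: combining one singer's excitation with another's resonant structure is a form of cross-synthesis. A vocoder-style sketch is below: flatten (whiten) the treble's spectrum, keep its phases, and impose a smoothed version of the tenor's spectral envelope. The two waveforms are synthetic stand-ins for the recordings, and a real system would work frame by frame with a short-time Fourier transform rather than on the whole signal at once.]

```python
import numpy as np

SR = 8000
t = np.arange(SR) / SR

# Stand-ins for the two singers: the "tenor" supplies the resonant
# colour, the "treble" supplies the excitation an octave higher
tenor = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 400 * t)
treble = np.sign(np.sin(2 * np.pi * 440 * t))   # harmonically rich buzz

T = np.fft.rfft(tenor)
B = np.fft.rfft(treble)

# Smooth the tenor's magnitude spectrum into a broad resonance envelope
env = np.convolve(np.abs(T), np.ones(51) / 51, mode="same")

# Whiten the treble's magnitudes (keeping its phases), then impose the
# tenor's envelope on that whitened excitation
hybrid = np.fft.irfft(env * B / (np.abs(B) + 1e-9), n=len(treble))
```

The result keeps the treble's pitch and timing while borrowing the tenor's broad spectral shape, which is the spirit of the castrato reconstruction.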

Kat - And now we're going to hear them both together (sound). That's incredible.

Jez - It's still a work in progress. There are a few wobbles there, and one of the difficulties we're having is that a boy's voice tends to be very breathy, because boys don't have the same kind of control as an adult singer. So we need to find ways of removing that breathiness. The final version should be aired some time over the summer.
