How one man is copying the brain
Interview with Richard Turner
Another way to create artificial intelligence is to copy the human brain. But it'll have to be done in stages. Step one is to copy how our senses make sense of the world, and Richard Turner is focusing on hearing. This has proven tricky though, as he explained to Graihagh Jackson...
Richard - Well, simply put, the brain is the most complicated object that we know about in the whole universe at the moment. It contains, roughly speaking, 100 billion neurons which are wired up in an incredibly complicated way, and it's able to use that structure to process incoming information, make decisions about what it should do in light of that information, and action those decisions, i.e. make your muscles move. At the moment, we know next to nothing about how that entire system carries out those three basic, fundamental operations, although by looking at particular parts of the system where we think we know what the functionality is, we are able to make some small progress.
Graihagh - When you say some small progress, you mean in copying or what you might call reverse engineering the brain?
Richard - Both. So I've been looking at the principles by which people process sounds, and whether they can be used to develop computer systems for processing sounds in the same ways that people do.
Graihagh - How we process sound is remarkable. Right now, if you stopped what you're doing, how many different sounds can you hear? 3? 4? 5? 10? Yet, you can still listen to me and not be distracted by all those sounds. Now, think about it from a machine's perspective. How does it know which sounds to ignore and which ones to pay attention to? Well, Richard's got that sorted: by understanding the statistics of a sound, you can make a machine learn to distinguish different noises from a camping trip, say...
Richard - In the clip I start by a campfire and then you'll hear my footsteps as I walk through gravel. Then I go past a babbling stream and the wind then starts to get stronger, and I unzip my tent and get in and it turns out I do this just in time because it starts to rain...
Now all of those sounds are examples of what we call audio textures. They're comprised of many independent events, like individual raindrops falling on a surface, which in combination we hear as the sound of rain. Perhaps what's remarkable about those clips is that each one of those sounds is, in fact, synthetic. It was produced by taking a short clip from a camping trip and then training a machine learning algorithm to learn the statistics of those sounds.
Graihagh - I mean, I think of a sound as sound; I don't see it or hear it as a series of statistics. So what do you mean by that?
Richard - Take the rain sound, for example. The rain sound, when you take short clips of it, contains different patterns of falling raindrops, and so the thing that is constant through time is not the sound waveform itself, it's the statistics: it's the rate at which these raindrops appear, and the properties of the surface, and so on and so forth.
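To make the idea concrete, here is a minimal sketch in Python of what "the statistics of rain" might mean. This is not Richard Turner's actual model: the crude event detector, the Poisson assumption, the sample rate SR, and the names analyse, synthesise and load_clip are all illustrative assumptions. The point is only that once the rate of drops and an average drop sound are measured, a fresh clip with new raindrops but the same statistics can be generated.

```python
# Illustrative sketch only: summarise "rain" as (1) the rate at which drop
# events occur and (2) an average drop waveform, then synthesise a *new*
# clip by sampling fresh drops at that rate.

import numpy as np

SR = 16000  # sample rate in Hz (an assumption, not from the interview)

def analyse(clip, drop_len=200, threshold=3.0):
    """Estimate the texture's statistics from a real recording."""
    # Crude event detector: samples whose amplitude jumps above
    # `threshold` standard deviations are treated as drop onsets.
    z = np.abs(clip) / clip.std()
    onsets = np.flatnonzero((z[1:] > threshold) & (z[:-1] <= threshold))
    onsets = onsets[onsets < len(clip) - drop_len]
    rate = len(onsets) / (len(clip) / SR)  # drops per second
    # Average the snippet after each onset: the drop 'shape' carries
    # the properties of the surface the drops land on.
    shape = np.mean([clip[o:o + drop_len] for o in onsets], axis=0)
    return rate, shape

def synthesise(rate, shape, seconds=5.0, seed=0):
    """Generate a new clip with the same statistics but new raindrops."""
    rng = np.random.default_rng(seed)
    n = int(seconds * SR)
    out = np.zeros(n + len(shape))
    # Drop onsets follow a Poisson process with the estimated rate.
    t = 0.0
    while True:
        t += rng.exponential(1.0 / rate)
        i = int(t * SR)
        if i >= n:
            break
        out[i:i + len(shape)] += shape * rng.uniform(0.5, 1.5)
    return out[:n]

# rain = load_clip("rain.wav")          # hypothetical loader
# rate, shape = analyse(rain)
# new_rain = synthesise(rate, shape)    # different raindrops, same 'rain'
```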
Graihagh - Okay, I see. So this computer has emulated those sounds - it's a synthetic sound. Why is that beneficial in helping you reverse engineer the brain?
Richard - Yes, that's a good question. At the moment this just sounds like a cute trick so...
Graihagh - I was going to say I'm loving it from a radio point of view. I could use that sort of computer in my line of work...
Richard - Indeed. But when we look at the way the computer algorithm does this, we think that it's using principles which are similar to those which are used by the human brain.
Graihagh - When you say we think, it makes me think you're unsure. Does that mean you're not entirely sure how the machine is working it out or, the other way round, you're not sure how humans are able to distinguish and refocus their ears?
Richard - We're unsure in the sense that we don't know whether the 'brain' in the machine is operating in the same way that the brain of a person is, even though it responds in a similar way.
Graihagh - But hang on a second. You've built this machine, so how can you not know how it works?
Richard - Well, this is one of the beauties of machine learning. What I've been looking at are what's called unsupervised algorithms. You just get data, the data is unstructured, you don't know what's going on, you don't even know what's good to predict, and so the algorithm itself has to figure out what the structure is. So, I think many of the future advances in machine learning will move towards these unsupervised settings, and why I think they're really interesting is that's the setting that the human brain is faced with. It's not given a teacher which is able to give it the labels of millions and millions of objects; it has to adapt automatically (maybe it gets a few labels from your mum and dad), but the number of labelled examples you get is essentially zero when you compare it to what our current state-of-the-art object recognition systems are given.
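As an illustration of the unsupervised setting Richard describes, here is a minimal sketch using k-means clustering, a standard stand-in rather than the algorithm from the interview: the code is handed raw, unlabelled points and must discover the group structure entirely on its own.

```python
# Unsupervised learning in miniature: no labels, no teacher -- the
# algorithm has to figure out what the structure of the data is.

import numpy as np

def kmeans(X, k=3, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centre -- no labels needed.
        dists = np.linalg.norm(X[:, None] - centres[None], axis=2)
        assign = dists.argmin(axis=1)
        # Move each centre to the mean of its assigned points.
        for j in range(k):
            if np.any(assign == j):
                centres[j] = X[assign == j].mean(axis=0)
    return centres, assign

# Unlabelled data with hidden structure: three blobs of points.
rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(loc=c, scale=0.5, size=(100, 2))
                    for c in ([0, 0], [5, 5], [0, 5])])
centres, assign = kmeans(X)
print(centres)  # the algorithm recovers the three groups by itself
```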
Graihagh - So, in an ideal world, we'd scale this up. This is obviously just looking at sound, but we'd scale this up to emulate what - the whole brain and everything that we do?
Richard - To put it in simple terms, we have no idea of the software that the brain runs at the moment and it's going to take a long time to figure out details of that software that would be necessary to come up with, say, super-intelligent algorithms.