Science Interviews


Sun, 6th Jul 2014

A Conscious Computer?

Dr Joshua Brown, Indiana University

From the show Morality and Motivation

And closing today’s exploration of morality and motivation, during the opening ceremony of the FENS meeting, I caught up with Joshua Brown.  He’s a computer scientist and engineer at Indiana University, America.  He models what happens in our brains when we make a mistake and how we experience disappointment when we find ourselves in unexpected situations.

Joshua -   In general terms, we form expectations every day about the kinds of things that we think will happen, based on what we know about the world.  For example, if you are meeting with someone, you might expect certain things to happen, and if they don't, you experience some kind of disappointment.  Likewise, if you are doing something and you make a mistake, you recognise that something has gone wrong because you first have an expectation about how things should work.  And so, what we've done is to take a lot of neuroscience data about this phenomenon of forming expectations and evaluating whether what happens is consistent with your expectations or whether there's a surprise.  What we've built is a model of how particular regions of the brain work.

So, that's the part that's just directly above the middle of your eyes and back a few centimetres.  We built a model in a way that captures a lot of different cognitive neuroscience data: functional MRI, monkey neurophysiology, and human electrophysiology, that is, the voltages off the scalp.  We've developed a model that can account for all these different kinds of data.  So, that's really the approach I've taken, rather than trying to work out a proper definition of what is a prediction, what is risk, what it means to have that experience of, "Oops!  Something went wrong or something unexpected happened."  Rather, what I've done is simply to look at the neural circuits and meet them on their own terms and ask: how do we get systems of interacting cells to produce the kinds of signals that we observe to be associated with forming predictions about what's going to happen and evaluating whether something unexpected happened?  Where that's gotten us now is to a model, and we have several models, one more recently, that account for a lot of the neuroscience data.  It turns out that the mechanisms we've uncovered are surprisingly simple, at least conceptually, and that there are really two parts to the process.  The first is forming predictions, where, in the simplest sense, the more strongly you predict a particular outcome, the more cell activity you have associated with that prediction.  And then, when it comes to detecting surprises, really all we had to do was take the difference, that is, the subtraction, between the predictions that were represented and the actual outcomes.  And so, whenever you have a mismatch, that difference is larger, and that larger difference constitutes this prediction error signal, or the surprise.  So, in a sense, what we found is that when we wade through all this complexity and this large corpus of neuroscience data, we can organise a lot of it with surprisingly simple principles.
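The two-part mechanism described above, prediction strength represented as activity and surprise computed as a subtraction, can be sketched in a few lines of code.  This is an illustrative toy only, not Brown's published neural model; the outcome names and numbers are invented for the example:

```python
def surprise(predicted, actual):
    """Prediction-error signal: actual outcome minus predicted outcome,
    computed per possible outcome.  A bigger mismatch between what was
    predicted and what actually happened means a bigger surprise."""
    outcomes = set(predicted) | set(actual)
    return {o: actual.get(o, 0.0) - predicted.get(o, 0.0) for o in outcomes}

# Strong prediction that comes true: small error, little surprise.
small = surprise({"reward": 0.9}, {"reward": 1.0})

# Weak prediction of an outcome that happens anyway: large error, big surprise.
large = surprise({"reward": 0.1}, {"reward": 1.0})

print(small, large)
```

The same subtraction also captures disappointment: if a strongly predicted outcome fails to occur (actual activity near zero), the error is large and negative.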

Hannah -   And this is seen in human brain conditions such as schizophrenia, where scientists have theorised that there's a mismatch between prediction and actual reality, and that those differences might lead on to the delusions and hallucinations that these patients experience.  So, can your computer model somehow help us to understand more about these patients' conditions and maybe come up with treatments?

Joshua -   That's a great question.  The first thing I can say is that we have in fact done some of that.  A few years ago, we did a study of schizophrenia, and what we found is that when you look explicitly at how individuals with schizophrenia form predictions and how they evaluate their own predictions, they often get things backwards.  For the things that are less likely to occur, their brain activity was stronger; in other words, they were predicting more.  And the things that were more likely to occur, they seemed to predict less.  Because those predictions were malfunctioning, the outcome evaluations, that is, the surprise that was detected, were not really consistent with reality.  And so, in that sense, we traced part of the problem in schizophrenia back to a problem with learning how to accurately predict the world, and what the consequences of your actions and of other people's actions will be.  In essence then, part of the problem is that individuals with schizophrenia have great difficulty learning how the world works and how they work within the world.  We were able to see that at a neural level in a way that we can now account for with the same computational neural model.  We've also done other studies with drug addiction, looking at how our model can make sense of what's going on in the brains of individuals who are dependent on drugs.  What we find is, again, that there are systematic differences in how people with drug addiction predict the consequences of their actions, the risks and also the rewards.  And there are others with obsessive compulsive disorder, for example, where there seems to be an over-prediction of outcomes, especially negative outcomes, in a way that's not consistent with reality.
And so, subjectively, people experience a kind of anxiety, the obsessions or the compulsions, as if there were some bad consequence that was going to happen, and that leads to all sorts of behaviour that's either unnecessary or at best inconsistent with the needs of the actual environment.

Hannah -   So, your computational model has really taught us more about how humans behave and how we recognise particular errors that we might make in our own judgment.  I don't know whether you've seen the film "Her" with Scarlett Johansson, where she plays an artificially intelligent operating system who has relationships with men.  She must have some intuition about how they value things, how they notice reward, and how they recognise errors in order to communicate with them within this film.  Do you think that your computational model might help to develop software similar to that?

Joshua -   I think if you look at how "Her", whether we call it 'her' or a computer, is depicted in the film, there's a huge range of cognitive processes: processes related to empathy, social cognition, and language.  In one sense, what we've done is to isolate a small part of cognition, a small but, I think, important one.  Now, every model, whether I build it or someone else builds it, is by definition a simplification.  So, I'm certainly not going to claim that I have some model approaching the complexity of 'her', but I think we have sorted out a piece of it.  I think that with the kinds of things that we've developed, and that other people have developed, it will be possible to put those pieces together into a larger functioning system.  I think down the road, it's going to be increasingly possible to build systems that appear more human-like.  In fact, nowadays, if you make a phone call to a large corporation, you're likely to speak to what is essentially a robot, a computer.  Even in my lifetime, those processes have gotten a whole lot better.  They're getting better at recognising speech, better at responding in a way that sounds more natural or human-like.  Even recently, there's been talk of machines being able to pass the Turing test, that is, to fool people into thinking that the machines really are other people.  I think we are approaching that point.

Hannah -   That's all we have time for today, unfortunately.  Thanks to Ray Dolan, Hanneke den Ouden, and Joshua Brown.  I'll be back again tomorrow to investigate the importance of sleep.  (snore)  So, have you ever been up all night partying and then crashed out completely the next day?  That's your brain's sleep bank getting out of the red and making up your lost sleep credit.  We'll discover the brain's sleep bank in fruit flies…

Gero -   Except we don't keep them awake all night partying; it's more of a Zero Dark Thirty, if you've seen that movie about torture in Iraq.  It's more that approach to sleep deprivation, like the secret service would do.

Hannah -   Poor fruit flies, and they didn't even get the joy of a party to justify their sleep deprivation.  Well, I'm off to enjoy Milan, but I'll be back again tomorrow to wake up and open my mind.  My name is Hannah Critchlow and this is a special Naked Neuroscience episode from the Federation of European Neuroscience Societies 2014 Forum, reporting from Milan.





Comments

No. The ubiquity of consciousness cannot be replicated in a computer: sentience is a fundamental aspect of all living organisms. Computers work through machine-based algorithms (AI) which are not self-aware. Thus neuroscience could not develop a conscious entity by designing a machine. tkadm30, Mon, 14th Dec 2015

A similar question was discussed here

Colin2B, Mon, 14th Dec 2015

Computer models are possible. If that model includes a model of the computer modelling it, then it is self-aware, to some degree; e.g. the computer could pass the mirror self-recognition test, if it had a camera attached.

RD, Mon, 14th Dec 2015

I think with neural networked self-learning systems it is possible. Colin2B, Mon, 14th Dec 2015

