Could computers crack human emotions?

Are emotions a key part of what defines us as individuals, and subjective and conscious beings? And if so, could computers crack them?
19 September 2013

Interview with 

Prof Peter Robinson, Cambridge University

Could computers ever have the power to predict our emotions and change them?  To find out, I caught up with Professor Peter Robinson, a computer scientist at Cambridge University.

Peter -   It's a crucial part of human communication.  It's been studied scientifically since Darwin's time.  He was interested in the way that we use facial expressions to convey these signals.  People who can't do it are at a social disadvantage, for example people with autism spectrum conditions.  So, in that sense, computers are autistic.  They don't recognise these signals.  So, the computer carries on blithely saying whatever it wants to say or doing whatever it wants to do, and it doesn't look at the expression on your face, or the tone of your voice as you interact with it.  And so, we've been looking at ways that we can give computers some sort of emotional awareness.

Hannah -   Building clever algorithms that can actually read emotion?  That's quite a feat!  It must be very complicated maths that you're plugging in there, and you must have had to sift through a huge amount of emotion data to build the algorithm in the first place?

Peter -   That's right.  Well, right on both those accounts.  We've worked with our friends in the Autism Research Centre, Simon Baron-Cohen, and he's interested in the real people who have autism spectrum conditions.  And so, we've drawn a lot on his research work into the theory of emotions.  On top of that, we've used all the usual sorts of things that computer scientists use nowadays.  Essentially, this comes down to machine learning: rather than actually writing a programme that tabulates exactly how to interpret these social signals that people are giving out, we write systems that learn from examples.  We've had lots of video clips and audio clips of actors expressing emotions, and we can use those to train our systems.  In this context, we know the probability that you'll show particular facial expressions if you're feeling a particular emotion.  We can calculate those, and this allows us to turn it the other way around, so that when we see the facial expressions, we can work out the probability of different mental states.  One of the projects is looking at ways in which we can make computer games that might be able to help children, say, with Asperger syndrome, who are often very intelligent; they just lack the ability to read these emotions.  They know they have a problem.  They want to do something about it.  We can make computer games that help them learn to read these expressions in other people.
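To make that inversion step concrete, here is a minimal sketch (not the Cambridge group's actual code) of the idea Peter describes: learn the probability of facial cues given an emotion from labelled clips, then flip it with Bayes' rule to get the probability of each emotion given the cues you observe.  The emotion labels, cue names and training data are invented purely for illustration.

```python
from collections import Counter, defaultdict

# Hypothetical training data: each clip is (emotion label, set of observed facial cues).
training_clips = [
    ("happy",     {"smile", "raised_cheeks"}),
    ("happy",     {"smile"}),
    ("surprised", {"raised_brows", "open_mouth"}),
    ("confused",  {"furrowed_brow", "head_tilt"}),
    ("confused",  {"furrowed_brow"}),
]

emotion_counts = Counter(label for label, _ in training_clips)
cue_counts = defaultdict(Counter)          # cue_counts[emotion][cue] = times the cue was seen
for label, cues in training_clips:
    for cue in cues:
        cue_counts[label][cue] += 1

all_cues = {cue for _, cues in training_clips for cue in cues}

def posterior(observed_cues):
    """P(emotion | observed cues) via naive Bayes with add-one smoothing."""
    total_clips = sum(emotion_counts.values())
    scores = {}
    for emotion, n in emotion_counts.items():
        p = n / total_clips                      # prior P(emotion)
        for cue in all_cues:
            p_cue = (cue_counts[emotion][cue] + 1) / (n + 2)   # smoothed P(cue | emotion)
            p *= p_cue if cue in observed_cues else (1 - p_cue)
        scores[emotion] = p
    norm = sum(scores.values())
    return {e: round(s / norm, 3) for e, s in scores.items()}

# A furrowed brow on its own should make "confused" the most probable mental state.
print(posterior({"furrowed_brow"}))
```

A real system would of course work from continuous features extracted from video and audio rather than hand-named cues, but the underlying "learn the likelihoods, then invert them" structure is the same.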

Hannah -   And so, how sensitive or how accurate is this computer at reading and gauging people's emotions?

Peter -   That's also an interesting question.  The sensitivity is fairly good.  Different people have different magnitudes with which they express emotion, and different cultures are more expressive or less.  But the accuracy, well, it's not like the sort of computer system that you're used to - your spreadsheet.  This does not give you a precise, accurate answer.  Emotions are things that different people perceive differently.  There's a consensus view, but even when we show our video clips to a human audience, we're unlikely to get them all agreeing exactly on what they're seeing.  So, if we give them say, six choices, we get maybe a 70% agreement rate.  But that turns out to be about the same sort of agreement accuracy that you'd get if you were showing it to people rather than the computer.  So, it's quite important when you're using this sort of information from a computer system - the sensors that are reading these social signals - to understand that they're a hint rather than an absolute description.  That means that the way that the computers react to this information has got to be rather different, and we haven't really had much time to even think about how we use this information, how we change the way the computer is operating in response to the emotions that it's detecting.  We did some trials looking particularly at the problem of car drivers dealing with busy roads, unfamiliar environments and increasing amounts of technology in the car.  We wanted to see if we could detect when a car driver was upset by the environment and perhaps adapt accordingly.  We set up some simulations for this.

The thing that we observed first of all is that actually, most of the time in our driving simulator, people are just completely neutral.  They don't show any emotions, which is rather deceiving.  So, there's then an even more difficult question of what to do in response to the information.  So again, to take the example of a car driver, if you see a car driver who is getting aroused and frustrated, and angry, actually, it's rather a bad idea to have a computer system that patronises them and tells them to calm down.  The computer has to mirror the emotion of the person that it's interacting with, but at a lower intensity - firm but not offensive.

Hannah -   Thanks to Peter Robinson from Cambridge University.
