The joking robot

Can a computer learn to crack jokes?
17 January 2017

Interview with Professor Graeme Ritchie, University of Aberdeen


What kind of tree is nauseated?
A sick-amore!

Not bad for a cracker joke, but impressively, this was actually written by a computer program! Its creator, Professor Graeme Ritchie, spoke to Chris Smith about the challenges involved in getting computers to understand jokes.

Graeme - We started this in the early ‘90s and, at that stage, artificial intelligence was trying to model virtually every aspect of human behaviour, except for some of the more emotional and creative facets of human life. Nobody had looked at humour or jokes and we figured that AI was a good way to model human behaviour and to understand what was going on.

So we started at what we thought was the shallow end with these very simple punning jokes, the kind of things you’ve just been listening to, because we thought they had some structure to them, we could see some simple patterns in them, and that was just a first step along the road. So we weren’t really trying to model a sense of humour, which is a much more subtle thing; we were just trying to figure out what shape those jokes have and how we could write rules that would describe them. And we saw this as one step on a very long road towards getting a better understanding of how jokes work.
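
To make the idea of rules that describe those jokes a little more concrete, here is a minimal illustrative sketch in Python. It is not the actual research software: the tiny lexicon, the two question-answer schemas and every name in it are invented for illustration, but it shows the flavour of the schema-plus-lexicon pun generation Graeme describes.

```python
# Illustrative sketch only: a toy schema-plus-lexicon pun generator.
# The lexicon entries and schemas below are hand-made for this example;
# a real system would derive them from much larger lexical resources.

LEXICON = [
    {"word": "sycamore", "respelling": "sick-amore",
     "category": "tree", "attribute": "nauseated", "schema": "what_kind"},
    {"word": "photo", "respelling": "foe-to",
     "category": "image", "attribute": "enemy", "schema": "what_call"},
]

# Question-answer schemas (templates) that a rule fills in from a lexicon entry.
SCHEMAS = {
    "what_kind": "What kind of {category} is {attribute}?",
    "what_call": "What do you call an {attribute} {category}?",
}

def generate_pun(entry):
    """Build the question from the entry's schema, then answer with the
    punning respelling of the underlying word."""
    question = SCHEMAS[entry["schema"]].format(**entry)
    answer = "A {}!".format(entry["respelling"])
    return question, answer

if __name__ == "__main__":
    for entry in LEXICON:
        question, answer = generate_pun(entry)
        print(question)
        print(answer)
        print()
```

The interesting research work lies in everything this sketch hand-codes: finding near-homophones, knowing which attributes and categories go with which words, and deciding which combinations actually yield a joke.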

Chris - They are actually quite clever in the sense that when you see them, like:

What do you call an enemy image? (That’s another one of your computer’s jokes.)
A foe-to!

I mean, they are quite clever but they’re not that funny. What theories do we have of what makes something funny?

Graeme - There's a bit of a lack on the theoretical side. People have been writing about humour for centuries and it seems, on the face of it, that we have quite a lot of theories of humour - several. But when you examine them very closely, they’re not what an actual scientist would call a theory; they tend to be very broad opinions about things that go on in humour.

Chris - Can you reverse the equation, Graeme? You’ve got a computer program that will generate things like puns that have the potential to make us laugh. It didn’t work on Sophie but, you know, we’re working on that. Can you turn it round and feed something into your computer program so that it would know if I was joking? Because, if I said certain things to you, certain word orders or a certain manner of speaking, you’d know I was punning at you. Can your computer do that?

Graeme - Not at the moment. And that’s actually quite a difficult problem, because the whole range of what you could, in principle, feed in is so vast that it’s quite a challenging task to figure out whether it’s a joke or not, because there could be so many different ways of making a joke. When you’re generating, you have the data under control. You’re experimenting with exactly the area of humour that you want to look at and that’s all you’re generating. So you can narrow down the focus to a particular genre of humour. When you’re accepting input, then it’s much more difficult. It would just be an act of programming to write something which takes the jokes of the kind we generate and recognises exactly those kinds of jokes and no others. We could do that, but it wouldn’t be very interesting because all we would be doing is just reversing the process. And if you fed it anything other than exactly that kind of joke, it would just say no, even if it was a very funny joke of some other kind.
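
To see why simply reversing the process would be uninteresting, here is a hedged sketch that reuses the invented templates from the earlier example (the patterns and the function name are likewise invented): a recogniser built this way just pattern-matches the generator’s own output, so it says no to any joke that doesn’t fit those exact templates, however funny it is.

```python
import re

# Illustrative only: a "recogniser" obtained by reversing the toy generation
# schemas above. It accepts exactly the jokes those schemas can produce and
# nothing else, which is why this approach says little about humour in general.
PATTERNS = [
    re.compile(r"^What kind of \w+ is \w+\?\s*A [\w-]+!$"),
    re.compile(r"^What do you call an \w+ \w+\?\s*A [\w-]+!$"),
]

def looks_like_our_pun(text):
    return any(p.match(text) for p in PATTERNS)

print(looks_like_our_pun("What kind of tree is nauseated? A sick-amore!"))   # True
print(looks_like_our_pun("Why did the chicken cross the road? To get to the other side."))  # False, even though it is a joke
```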

Chris - It’s pretty important, isn’t it? Because if we seek to use these sorts of systems in the future to engage with people, whether it’s the ATM you’re getting money out of or a telephone answering system or something. People are human and they do have humour, and humour is a very important part of our social interactions. That’s what Sophie was saying. And if we don’t have systems that are capable of understanding and modelling it, then we’re not going to enjoy engaging with systems like the ones you’re creating.

Graeme - Well, that’s true, and a lot of people have argued that if we’re going to have avatars on our computer systems, or our phones, or our tablets that interact with us in a very natural, lifelike way, they’re going to have to have the equivalent of a sense of humour, because they’re going to have to pick up if the user is being lighthearted or just making a joke. They’re going to have to recognise that. And there’s also an argument that says that maybe the intelligent agent on the device should lighten up its own interactions with the occasional joke, but that’s a bit more risky.
