Sentient AI: can machines ever be like man?

Claims that an artificially intelligent chatbot was sentient sparked uproar. Are "alive" algorithms realistic...
18 July 2022

Interview with 

Toby Walsh, University of New South Wales


To the world of artificial intelligence or AI now; Google engineer Blake Lemoine recently captured the world’s attention when he went public with his conviction that the chatbot he was working with, the "Language Model for Dialogue Applications", or LaMDA for short, had displayed evidence of having its own feelings and consciousness. Since speaking to the press, Lemoine has been put on Google gardening leave, and the company have roundly dismissed his claims. So is this a computer programme with feelings that knows it's alive and fears the off switch, or just a clever piece of software? James Tytko asked Toby Walsh, professor of artificial intelligence at the School of Computer Science and Engineering at the University of New South Wales, whether LaMDA could really be sentient…

Toby - No, it's not really sentient, not in my opinion. We don't build machines with anything like sentience. On a strictly technical level, it's not sentient because it's not made of biological stuff, and only biological things have consciousness and sentience. But what I think is interesting is how easily even smart people like a senior Google engineer can be taken in. So it says more about human gullibility than it does about the intelligence of machines.

James - If it isn't sentient, how is LaMDA so good at replicating the speech of a real person?

Toby - Well, it really is autocomplete on steroids. It is essentially the same sort of thing that's on your smartphone that can finish the word or maybe finish the sentence when you're typing a text or an email, but they've taken it to the next level by pretty much pouring the contents of the internet into this large neural network. So it can complete not just the next word or the next sentence, it can complete whole paragraphs. But it's not understanding what it's saying. It's merely trying to say things that would frequently turn up on the web.
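The "autocomplete" idea Toby describes can be illustrated with a deliberately simple sketch: count which word follows which in a small corpus, then predict the most frequent continuation. This is not how LaMDA is actually built (which uses a large neural network trained on web-scale text), just a toy model of the same next-word-prediction principle.

```python
from collections import Counter, defaultdict

# Toy "autocomplete": count which word follows each word in a tiny corpus,
# then predict the most common continuation. LaMDA rests on the same
# next-token-prediction principle, but with a huge neural network trained
# on web-scale text rather than simple counts.
corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — seen twice after "the", vs. "mat" once
```

Scaling the statistics up from single-word counts to a network trained on most of the web is what lets a model continue whole paragraphs fluently, without that implying any understanding of the text.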

James - But is it a concern that, without him going public with this belief, we would not have known that a chatbot as sophisticated as this was in development at one of the biggest technology companies in the world, Google? Why are they interested in developing chatbots that we find difficult to differentiate from real people?

Toby - Well, there is an arms race going on between the big tech companies to develop these chatbots. And Google's not the only one that's got a large chatbot; all the other tech giants are developing them. There's nothing special about what Google's done, it's their flavour of chatbot. And they're going to prove very useful. Google is going to be using it to help return better, more accurate search results for you. They're being used in customer service, they're being used in a variety of ways. They can do some remarkable things. They can summarise restaurant reviews for you. They can even write computer code; you just say what you would like the computer code to do and, because there's quite a bit of computer code out there on the web that's been poured into this program, it can, surprisingly enough, actually write computer code.

James - I'm interested in the idea of sentience and AI. And I'm wondering whether sentience is even in the realm of science: can we ever expect scientists to understand sentience well enough to then be able to give it to machines?

Toby - We don't know if it's the stuff purely of biology or something that we could reproduce in machines. I always say we will be lucky if machines are never sentient, because then we won't have to worry about them. I can go back to my laboratory and I can take my robot apart, diode by diode, and no one's going to care because it doesn't have any feelings. It's not going to experience any pain. But if it did become sentient, then we would likely have to give it rights, as we do to other sentient things like humans and animals.

James - You've recently released a new book, "Machines Behaving Badly." Can you give me and our listeners a bit of a taste of your rough direction of travel in that new work?

Toby - Yeah. I don't think we have to get to some super intelligent machines - some Terminator-like robot - before we have to worry about these things. Indeed, I think we're already there. We already have to worry about the stupid algorithms that are starting to be given responsibilities that impact upon our lives. As an example, the machine learning algorithm used by Facebook to decide your news feed, or the machine learning algorithm that's used by Twitter to decide which tweets you read, doesn't seem to be particularly well-aligned with human values. It's encouraging the polarisation of our debate, it's encouraging fake news, it's not encouraging a healthy, democratic debate, it's not encouraging us to understand other people's viewpoints.

James - Are there other implementations of AI currently in use in our society that similarly crop up as a point of concern for you?

Toby - Yes, for example, many people don't realise that their boss today is an algorithm. If you work for Uber or Deliveroo or one of these gig economy companies, your work is not decided by a human, it's decided by an algorithm. And it's been demonstrated, for example, that the algorithm is encouraging bad driving. It's encouraging people to break the speed limit: if you do, you'll get more jobs and more money. There's going to be more work for those people.
