Could AI deepfakes cause havoc in upcoming elections?

In a year with a record number of national elections...
12 January 2024

Interview with Sander van der Linden, University of Cambridge


This year, a record-breaking number of countries - 40 in fact - will hold national elections. The outcomes could shape the course of global politics for decades to come. But there are fears that artificial intelligence - which has already been used to disrupt ballots around the world - could be used to exert undue influence and bias the results. So, should we be concerned? Sander van der Linden is a professor of social psychology at the University of Cambridge and author of Foolproof: Why We Fall for Misinformation and How to Build Immunity.

Sander - One of the threats is really that AI is democratising the production of disinformation. What I mean by that is that, before large language models, it took individuals or concerted groups of people to make things up and then find vulnerable targets. But now, with ChatGPT, anyone can create disinformation from their own bedroom. It's actually really easy to do: we've done it in our lab, and people have done it on the internet. So that makes it easier to create disinformation, but also to automate it. You may have heard of microtargeting, which is using your digital footprint - the time you spend on Facebook clicking on things, or the traces you leave behind on Google search - to find out information about you and then target you with ads, because we know that's more persuasive and people are more likely to click on them. But you can do that with fake information too. Before, troll farms - factories where people worked en masse to produce false information - would have to manually write individual messages and say, 'okay, well this person is a bit more extroverted, so we're going to write a fake extroverted message. This person's a bit more introverted, so we're going to write a fake introverted message.' Of course, now ChatGPT can create hundreds of messages. You just prompt it and say, 'I want a message for somebody who's extroverted or introverted,' and it can do that at great scale. It really eliminates the labour, so you can automate this process of microtargeting. That makes it easier for bad actors not only to generate the disinformation, but to target it at specific audiences.

Chris - So we have two problems, then: more people can do more bad things, and the material being made can be more targeted, more focused, and therefore potentially more effective at biasing people in the wrong way.

Sander - That's right. The immediate problem is that people are not good at identifying manipulated imagery and deepfakes. As many as 20 to 30% of political images are manipulated, and often people don't seem to know the difference. In studies, people do very poorly at telling deepfakes from genuine content. We also know that people deem AI-generated disinformation more persuasive than human-written disinformation, because the AI makes it more succinct and more accessible than human writing. And then lastly, I think the most convincing evidence that this is actually a problem comes from studies where researchers target people with a deepfake of a politician deliberately saying something false to try to antagonise a voter base. In one such study, the politician appears to make a joke about religion, for example, to try to offend religious voters. It's totally false, but it looks like the politician, and in a randomised experiment it actually impacted people's intentions to vote for that candidate and their policies.

Chris - So it sounds like there really is a potentially big problem with all of this. But isn't the solution the message that you have been putting out all along: that if you pre-warn people what to look for, they're less likely to fall into these sorts of traps? I looked at the video footage of Zelenskyy apparently suggesting that his countrymen should lay down their arms and surrender to Russia. It looked pretty plausible until I read someone explaining, 'this is what to look out for; that tells us this is fake.'

Sander - To some extent, yeah. I think that AI does pose some unique challenges to this idea of pre-bunking that you mentioned - trying to preemptively build resilience. In our lab, we do these sorts of quizzes on ourselves too, and supposedly we're trained experts. Sometimes we don't know if an image is manipulated or not. We had a computer scientist visit us who specialises in deepfakes, and it was really illuminating. So yes, it is true that you can preemptively teach people some of the tricks: that there's flickering around the eyes, or the hair is kind of floating, or there's stuff in the background that shouldn't be there, totally out of place. And that works for a period of time, until the technology updates. To use the viral analogy: once the virus mutates, maybe we need to update the vaccine. That's the challenge with AI. With all the stuff that we've been doing before, it's kind of permanent - we give people the fingerprints of misinformation, we expose them to weakened doses and help them refute it in advance, and then they become more immune. But AI is changing at a faster rate than other types of misinformation, so that does pose a challenge. We've been thinking about what the stable features are that would help people in advance, and one of those features is context. These videos tend to be presented in contexts that are not really plausible. For example, would the Pope wear a puffer jacket? It's hard to know from the image itself whether it's real or fake, so you ask: does this context make sense? And manipulators will often try to use political context to insert these videos.

Chris - Or is it that we as humans are just too flawed, fallible and weak-minded to see through the veil of AI, and in fact we need an AI to defend us from AI? Is that how it's going to work out? In the same way that our phone has antivirus on it, will the media presented to us pass through an AI filter that calls BS on stuff that doesn't really look plausible?

Sander - Well, you know, I think it's a good idea for people to have assistance. We're all operating under conditions of limited time and resources. Sure, if you had endless time and could Google everything, you might be able to figure it out, but in real life we're massively constrained. So the question becomes: can AI assist us? And yes, you can train AI to detect manipulation techniques online, automating the process to help you spot them. We also used AI to produce what we call the misinformation susceptibility test, a quiz anyone can take to test themselves - the headlines in it were actually generated by an AI. So AI can also assist in education. I would think of it as a tool that we can harness to help people recognise and identify misinformation.

Chris - Are you worried? Do you think this is likely to skew the results of elections this year and next?

Sander - Yeah, that's the big question. What worries me is that there's very little regulation that actually prevents political actors from using AI in deceptive ways in their campaigns. And that's true, I think, in the US and, maybe to a somewhat lesser degree, in the UK. And these tools have been used: politicians have deployed deepfakes of their opponents, and people have fallen for them. It's kind of mind-boggling that regulation is lagging to such a degree that we actually don't know how to legislate or regulate the use of deepfakes during elections. Ads are allowed, right? But what if you use deepfakes in ads? It becomes easy to subtly manipulate people. You can run an ad with the assistance of AI, but then it can ad-lib and produce things that aren't there, simulating situations that are maybe not outright false, but clearly manipulative - and where's the line? So I think what's happening is that reality is getting blurred for people, and they're left to their own devices to figure it out in the absence of any type of regulation. That does worry me: people are going to be influenced by this technology. So I would say that AI-assisted disinformation is coming to an election near you.
