Videos immunise viewers against fake news

Short animations played as YouTube Ads helped viewers detect common misinformation tactics
26 August 2022

Interview with Jon Roozenbeek, University of Cambridge

In the digital age, misinformation and fake news have become major obstacles. Dodgy stories can spread like wildfire: drugs for treating Covid, 5G and vaccines have all been recent victims of misinformation campaigns propagated online. So how can we counter the threat? According to Cambridge University’s Jon Roozenbeek, who’s just published a study on this, the answer is to show people how to recognise the tell-tale signs of a social media porky, as he told Julia Ravey...

Jon - The way we've gone about doing that is basically by creating a couple of very short videos, each about a minute and a half long, and each of these videos shows you a way in which people might be manipulated online. For example, by playing into your emotions, right? Like if you write a headline that is intensely emotional - that seeks to evoke fear, anger, outrage, and so on - it diverts your attention away from the accuracy of that headline in some way. But if you can point that out, for example, in a video, people actually become less susceptible to that kind of manipulation.

Julia - And with these videos, where did you play them?

Jon - So we did a bunch of lab studies first to see if there was a proof of concept here, because if you don't do lab studies first, then you can't really do anything else. That turned out to work really well, and then after that, we went to YouTube. So we ran these videos as YouTube ads - about 5 million people could have seen one of these videos as a YouTube ad. And then after that, we asked these people a single question. We gave them a headline and asked, "Do you think any kind of manipulation is being used in this headline? If so, please identify it." They got a number of response options to choose from, some correct and some incorrect. We had a control group as well, which didn't see any of the videos but did get the survey question. And what we wanted to see was: are the people who watched the video better at this than the control group? And they were.

Julia - With the videos being on YouTube - I know a lot of people skip YouTube ads. If you roll these out to protect people against misinformation online, is there a way to make sure people are actually watching these ads and taking them in, not just skipping over them?

Jon - The response rate was about 19%, meaning that of all the people who were shown the ad, about 19% actually watched it for a meaningful period of time. That was quite nice. And the way that we've tried to do that was by making the videos a bit more fun than usual, I suppose - trying not to be boring, which scientists have a habit of doing. For example, in every video, we used an example from pop culture - Star Wars, Family Guy, South Park, and so on - to explain how these manipulation techniques might be used. For instance, there's a scene in Star Wars - Episode III, I believe - where Anakin Skywalker, who's just about to turn to the dark side, tells Obi Wan Kenobi, "If you're not with me, then you're my enemy," right? Which is a telltale example of a false dichotomy, one of the techniques that we wanted to train people to recognize: being presented with two options when in reality there are more. You can be critical of Anakin but also not be his enemy. There's a third way there that's also a possibility.

Julia - That's so interesting. So what were the other things you were trying to teach people with these videos? So we have false dichotomy. What were a few of the other things that you were trying to point out?

Jon - Another one was emotional manipulation, the one I just explained. Then there was incoherence, which is using mutually exclusive arguments. So for example, with climate change, you get people saying it's not global warming, it's global cooling that's happening. And at the same time, they'll say climate models are bad, that we can't predict what will happen to the climate. These two arguments rule each other out; you can't use them at the same time. The fourth one was ad hominem attacks: attacking the person, not the argument - which sometimes makes sense. For example, if a tobacco company says "vaping is safe", you're like, well, considering your track record, I'm a bit skeptical. But in other cases this is very manipulative, and it isn't something that you would expect a reasonable debater to do. And then the fifth one was scapegoating. That happens quite often when a group of people is held responsible for a very complex problem that has multiple causes; singling out one group or one person is a very commonly used manipulation tactic in hate speech.

Julia - And from the results of your study, how effective do you think this technique could be if it was rolled out more widely to immunise people against misinformation online?

Jon - The study was unique in the sense that it demonstrated the actual scalability of this approach on social media. We ran an anti-misinformation campaign on YouTube as you actually would - there was little to no daylight between what we did and what someone who would actually run an anti-misinformation campaign on YouTube would do. It's pretty much the same thing. So that demonstrates the scalability, which is great. At the same time, that doesn't mean that we've solved the problem. For example, we weren't able to look at how people behave online afterwards. So we don't know if, for instance, people who watch the video about emotional manipulation then also start sharing less negative emotional content with each other. That's a subject for future research.
