Can social media influence us?

Can Facebook and Twitter influence our moods, our beliefs or even an election?
19 December 2017

Interview with 

Sander van der Linden, Cambridge University


A mobile phone sitting on a computer keyboard


Can social media affect the way we think? Are social media bubbles - and we’ll explain what those are in a minute - influencing elections, for example, and can Facebook even influence our mood? Cambridge University’s Sander van der Linden looks into this very thing, and he explained to Chris Smith just how pervasive these sites are...

Sander - I would say that the scale is massive. Globally, about 2 billion people are using social media. And perhaps what’s even more interesting is that more and more people - significant majorities - are getting their news from social media websites. This is particularly true for young adults, but also at increasing rates for older people. So I think perhaps the real answer to your question is this: if you now ask someone, do you have a Facebook account, and they say no, they’re probably going to be the odd one out.

Chris - In what way might this influence politics, the way news is spread and so on? How can this change what we’ve been doing for centuries?

Sander - I think the new media environment is definitely changing the way that people engage with information. I think the big challenge is trying to understand how. So, for example, if you’re asking whether echo chambers and filter bubbles and things like that are influencing political behaviour like voting, I think it’s quite a complicated question.

Chris - What is a filter bubble?

Sander - So an echo chamber is: imagine sticking your head in a chamber and everybody is absolutely loving what you’re saying and echoing back that you’re right - this is the right opinion to have. That’s the idea of an echo chamber. Perhaps it’s created by filter bubbles, which are based on Facebook’s and other social media algorithms that tailor content based on your prior click behaviour. When you like something, when you read something, that goes into the algorithm, which then selectively targets you with what they call “ideologically consistent content” - political content like what you’ve looked at before - and that deprives you of stories from the other side of the spectrum. That’s what they call a filter bubble.
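The tailoring Sander describes can be pictured with a toy sketch. This is an illustration only, not Facebook’s actual algorithm: it assumes each story carries a simple “leaning” label and simply ranks stories by how often the reader has previously clicked stories with the same leaning - so the feed drifts toward ideologically consistent content.

```python
# Toy "filter bubble" sketch -- an assumption-laden illustration,
# not any real platform's ranking algorithm.
from collections import Counter

def rank_feed(stories, click_history):
    """stories: list of (title, leaning) pairs.
    click_history: leanings of stories the user clicked before."""
    clicks = Counter(click_history)
    # Stories matching the user's past clicks score higher,
    # so content they already agree with rises to the top.
    return sorted(stories, key=lambda s: clicks[s[1]], reverse=True)

stories = [
    ("Tax cuts work", "right"),
    ("Welfare helps", "left"),
    ("Local fair opens", "neutral"),
]
feed = rank_feed(stories, click_history=["left", "left", "neutral"])
print([title for title, _ in feed])  # the "left" story ranks first
```

After a few clicks in one direction, the top of the feed is dominated by that direction - the reader never deliberately filtered anything out.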

Chris - It feeds me what I want to hear and sort of cushions and buffers me away from people, views, perspectives that I might not be aligned with, so I then end up being perhaps fooled into the idea that everyone agrees with me?

Sander - That’s exactly correct.

Chris - In what way might that then influence an election?

Sander - People say that it did. The trick is how you quantify that in academic terms, and I think there are two problems. One: some studies suggest that perhaps people weren’t exposed to as much fake news as we had previously imagined, and so the link to actually mobilising people to vote is difficult to make in causal terms. But then other studies show that, in fact - and this is the real power of social media - it’s often not the message itself, but the sharing by friends, and friends of your friends, that influences you and actually propels you to hold a certain opinion and vote one way or the other. I do think that it has an influence, but the problem is trying to quantify exactly how much it’s influencing public debate and voting behaviour. But I do think there’s reason to be concerned about it.

Chris - And this whole question about it possibly influencing your mood because you’re being fed certain things but not others - in what way could that work?

Sander - There are some clever experiments - and, in fact, some of these were done by Facebook employees - where they actually manipulated the newsfeed. What they would do, for example, is decrease the amount of positive content in your newsfeed or decrease the amount of negative content. By ‘positive’ or ‘negative’ I mean emotional content. And what they found is that when they decreased the amount of positive content you felt less positive, and when they increased the amount of negative content you felt more negative. In a way, they can actually regulate how you feel and respond to content, and I think that’s actually quite important and something to think about.
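The manipulation Sander describes boils down to filtering a feed by sentiment. Here is a minimal sketch of that idea; the published experiment’s methodology was far more involved, and the sentiment labels here are assumed to be given rather than computed.

```python
# Minimal sketch of sentiment-based feed filtering -- an illustration
# of the idea, not the methodology of the actual Facebook study.
def filter_feed(posts, suppress):
    """posts: list of (text, sentiment) pairs.
    suppress: the sentiment class to drop ('positive' or 'negative')."""
    return [text for text, sentiment in posts if sentiment != suppress]

posts = [
    ("Great day at the beach!", "positive"),
    ("Awful news this morning.", "negative"),
    ("Meeting moved to 3pm.", "neutral"),
]
# Suppressing positive posts leaves the reader a gloomier feed;
# suppressing negative ones does the opposite.
print(filter_feed(posts, suppress="positive"))
```

Run with `suppress="negative"` instead and the same feed reads noticeably sunnier - which is essentially the lever the experimenters pulled.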

Chris - That sounds quite dangerous?

Sander - It is, and studies show that it works the same way with targeting people with messages. People don’t know this, but you are being targeted online with messages based on your click behaviour, and the aim of those messages is to get you to do something - click on something, for example. Experiments have shown that simply targeting messages based on users’ behaviour works in a very concrete way, in that they can actually mobilise people to click on something. As you said, that can be quite dangerous depending, of course, on the purpose for which it is used - for example, influencing elections.

Chris - But could it be used for good as well? Presumably there’s some good in this, if we can do bad things with it we can equally well do good things?

Sander - That’s absolutely right. Now we have the ability to mobilise millions of interconnected people for things that we might value in society. The ALS ice bucket challenge was an example of that: an unprecedented viral campaign where millions of people around the world donated to a particular cause. It’s, in fact, a very low-cost way to connect people and to organise people independently - for example, to supply access to information or the internet in parts of the world where people are deprived of certain resources. I think people have the capacity to do amazing things online that previously weren’t possible.

Chris - Do we know whether or not something stands a good chance of doing very well on social media? Is there a formula you could tell us, where you say: well look, if you do your tweets in this way, or you craft your campaign like this, you’ll have a much greater chance of succeeding?

Sander - Yeah. Some of my research looked at what makes a campaign or social cause go viral. We looked at many, many different campaigns and one of the things we distilled was that there is almost a formula, and we use the acronym “SMART.” It stands for social influence, moral imperative, affective reactions and translational impact.

Very briefly, social influence means campaigns are inherently social so that they have an element that allows people to connect together and join the bandwagon.

Moral imperative means that the campaign is about something that people care about; maybe social injustice.

Affective reactions: there are emotions involved. Things that make people feel good or bad tend to go viral more often.

And then the last aspect, translational impact, is really about sustaining that viral impact, because when you look at most things that go viral, what goes up must eventually come down.

Chris - I’m not after a cheeky free consult or anything here Sander, but we do have our annual fundraiser coming up. So can you give us some tips, me and Georgia, so that when we run the next Naked Scientists fundraiser we have got an absolute dynamite campaign?

Sander - I'll do my best. The first aspect would be social influence: you want to illustrate how many good Samaritans are already contributing, so that people can see how much is being donated to the fundraiser. That provides social proof. Then you want to show that it’s a good cause - the moral imperative for making a contribution.

Chris - Hopefully, that’s obvious. I mean it’s the Naked Scientists.

Sander - Affective reactions: you want to make people feel good about their contribution. They’re contributing to a good cause and feel good about it, so that, hopefully - and this is where you hope to cross the T - they’ll come back next year and donate again.
