Is Twitter now a more dangerous place?

10 March 2023



BBC Panorama has been told by Twitter insiders that the company is no longer able to protect its users from trolling and state-coordinated disinformation. Allegedly, that's because of layoffs brought in by Elon Musk, who now owns the company. Why are Twitter and other social media platforms potentially endangering our democracy? What's the threat there?


Stefanie - It doesn't really surprise me very much, to be honest, to see these kinds of results, because it proves the point that, unfortunately, a lot of the detection work on harmful content, disinformation and hate speech is still done by humans. So of course, if you let those people go, the algorithms can only do the work to a certain extent.

Chris - I get you. So by reducing the size of the workforce who were scrutinising what was going through the platform, there are fewer filters in the way?

Stefanie - Exactly. Fortunately, a lot of content is taken out that we will never see, like terrorist content or child sexual abuse material. There are literally human beings who are looking at this for hours every day and filtering it, because a lot of algorithms still struggle with the detection work and humans are still much better at it.

Chris - The problem, though, is that computers are not very good at doing it, is it? Because people are saying we need to substitute in computers and get them to scrutinise the content, and then we fall victim to being flagged as doing something nefarious when we are not. There was a wildlife organisation, a charity, that put out some tweets about woodcocks - that's a bird - because they were very interested in raising awareness, and unfortunately they fell foul of Twitter's filters. It was ironic that a bird charity fell victim to Twitter's filter. I guess that's why people like you are trying to solve this, isn't it - to try and make more intelligent filters?

Stefanie - I'm not entirely sure whether artificial intelligence will ever fully be a solution for this. It just shows how nuanced and how contextual language is. A human being would recognise this as a charity because of the background knowledge, the contextual knowledge, that we have. But of course an algorithm does not have that.
