Can AI help to end fake news?

The same systems that are fanning the flames of fake news can also help to combat it...
09 May 2019

ARTIFICIAL INTELLIGENCE

Fake news has already fanned the flames of distrust towards media, politics and established institutions around the world. And while new technologies like artificial intelligence (AI) might make things even worse, they can also be used to combat misinformation.

Want to make yourself sound like Obama? In the past, that might have required physically imitating his voice, party-trick style. And even if you were very good at it, it almost certainly wouldn’t present a danger to our democracy. But technology has changed that. You can now easily and accurately make anyone say anything through AI. Just record a sentence with an online voice-cloning service and listen to yourself speak in a famous person’s voice.

Programs like this are often called deep fakes: AI systems that adapt audio, pictures and video to make people say and do things they never did.

These technologies could launch a new era of fake news and online misinformation. In 2017, Hany Farid, a computer scientist at Dartmouth College in the US who specialises in detecting fake videos, said that the rapid proliferation of new manipulation techniques had led to an ‘arms race’. Just imagine what elections will be like when we’re no longer able to trust video and audio. But some researchers are now fighting back, showing that AI can also be used for good.

‘AI has many ethical problems,’ said Francesco Nucci, applications research director at the Engineering Group, based in Italy. ‘But sometimes it can also be the solution. You can use AI in unethical ways, for example to make and spread fake news, but you can also use it to do good, for example to combat misinformation.’

Fact-checkers

Nucci is the principal researcher on the Fandango project, which aims to do just that. The team is building software tools to help journalists and fact-checkers detect and fight fake news, he says, and hopes to serve them in three ways.

The first component is what Nucci calls content-independent detection: tools that target the form of the content rather than the claims it makes.

Nucci explains that today, images and video can easily be manipulated, whether through simple Photoshop or more complex techniques like deep fakes. Fandango’s systems can reverse-engineer those changes, and use algorithms to help journalists spot manipulated content.
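
One well-known forensic technique for spotting such edits is error level analysis (ELA), which recompresses a JPEG and highlights regions that respond differently, often a sign of splicing. The sketch below is a minimal illustration of that general idea using the Pillow library; it is not Fandango’s actual tooling, and the quality and scaling values are assumptions.

```python
# A minimal error level analysis (ELA) sketch using Pillow.
# Regions pasted into a JPEG often recompress differently from
# the rest of the image, showing up as bright areas in the diff.
# Not Fandango's actual tooling; quality/scale values are guesses.
import io
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")

    # Re-save the image as JPEG at a known quality level.
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)

    # Pixel-wise difference between original and re-saved versions.
    diff = ImageChops.difference(original, resaved)

    # Amplify the differences so they are visible to the eye.
    extrema = diff.getextrema()
    max_channel = max(channel_max for _, channel_max in extrema)
    scale = 255.0 / max(max_channel, 1)
    return ImageEnhance.Brightness(diff).enhance(scale)

# ela_map = error_level_analysis("suspect_photo.jpg")
# ela_map.show()  # bright patches suggest possible manipulation
```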

As these tools look only at form, they don’t check whether the content itself makes false claims. That is the job of Fandango’s second line of research, which links stories that human fact-checkers have proven false and looks for online pages or social media posts with similar words and claims.

‘The tools can spot which fake news stories share the same root and allow journalists to investigate them,’ said Nucci.
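
As a rough illustration of how such matching might work, the sketch below compares new posts against a set of already debunked stories using TF-IDF cosine similarity via scikit-learn. The example texts and the threshold are invented for illustration; the project’s actual models are not public in this detail.

```python
# A rough sketch of matching new posts against debunked stories
# with TF-IDF cosine similarity (scikit-learn). The threshold and
# example texts are assumptions for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

debunked = [
    "Miracle fruit cures all known diseases overnight",
    "Secret law bans cash payments starting next month",
]
new_posts = [
    "Doctors stunned: this miracle fruit cures every disease",
    "Local bakery wins regional bread-making award",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(debunked + new_posts)
known, candidates = matrix[: len(debunked)], matrix[len(debunked):]

# Flag posts whose wording closely echoes an already debunked claim.
scores = cosine_similarity(candidates, known)
for post, row in zip(new_posts, scores):
    best = row.max()
    if best > 0.3:  # assumed threshold
        print(f"Possible match ({best:.2f}): {post!r}")
```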

Both of these components rely heavily on AI algorithms such as natural language processing. The third component allows journalists to respond to fake news.

A fake story might, for example, claim that a very high percentage of crimes in a European country are committed by foreign immigrants. In theory this should be an easy claim to disprove, given the large troves of open data available, yet journalists can waste valuable time finding that data. So Fandango’s tool links all kinds of European open data sources together, and bundles and visualises them. Journalists can, for example, use pooled national data to address claims about crime, or apply data from the European Copernicus satellites to climate change debates.

‘This way journalists can quickly respond to fake stories and not waste any time,’ said Nucci.
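
As a toy example of that kind of data-backed fact check, the snippet below pools two hypothetical national open-data tables with pandas and computes the share of crimes attributed to foreign nationals. The table names, column names and figures are all invented for illustration.

```python
# Toy fact-check against pooled open data using pandas.
# Table names, columns, and figures are hypothetical.
import pandas as pd

# Imagine one table per country, each with total and foreign-national
# crime counts published by a national statistics office.
country_a = pd.DataFrame({"total_crimes": [52000], "by_foreign_nationals": [4100]})
country_b = pd.DataFrame({"total_crimes": [87000], "by_foreign_nationals": [9300]})

pooled = pd.concat([country_a, country_b], ignore_index=True)
share = pooled["by_foreign_nationals"].sum() / pooled["total_crimes"].sum()

claim = 0.70  # the fake story's claimed share, assumed for illustration
print(f"Open data: {share:.1%} of recorded crimes; claim says {claim:.0%}")
```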

Their tools are currently being tested by the Belgian public broadcaster VRT, the main Italian news agency ANSA and the Spanish non-profit organisation CIVIO.

Fake news detection

Yet spotting fake news might not only be a question of finding untrue claims, but also of analysing massive amounts of social media sharing patterns, says Michael Bronstein, professor at the University of Lugano in Switzerland and at Imperial College London in the UK.

He leads a project called GoodNews, which uses AI to take an atypical approach to fake news detection.

‘Most existing approaches look at the content,’ said Prof. Bronstein. ‘They analyse semantic features that are characteristic of fake news. Which works to a certain degree, but runs into all kinds of problems.

‘There are, for example, language barriers, platforms like WhatsApp don’t give you access to the content because it’s encrypted and in many cases fake news might be an image, which is harder to analyse using techniques like natural language processing.’

So Prof. Bronstein and his team turned this model on its head, looking instead at how fake news spreads.

Essentially, previous studies show that fake news stories are shared online in different ways from real news stories, says Prof. Bronstein. Fake news might have far more shares than likes on Facebook, while regular posts tend to have more likes than they have shares. By spotting patterns like these, GoodNews attaches a credibility score to a news item.
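
A crude version of that signal can be computed directly from engagement counts. The sketch below scores items by their share-to-like ratio; the formula and the example numbers are invented illustrations, not GoodNews’s actual scoring.

```python
# Naive credibility heuristic based on the share/like pattern the
# article describes: fake stories often get shared more than liked.
# The formula and numbers are assumptions for illustration.

def credibility_score(likes: int, shares: int) -> float:
    """Return a score in [0, 1]; higher means more credible."""
    ratio = shares / max(likes, 1)
    # Shares greatly exceeding likes pushes the score down.
    return 1.0 / (1.0 + ratio)

print(credibility_score(likes=900, shares=120))   # ~0.88, typical pattern
print(credibility_score(likes=150, shares=2400))  # ~0.06, suspicious pattern
```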

The team has built their first prototype, which uses graph-based machine learning, an AI technique in which Prof. Bronstein is an expert. The prototype is trained on data from Twitter, where the researchers trace stories that have been fact-checked by journalists and shown to be false. In this way, journalists train the AI algorithm by showing it which stories are fake and which are not.
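
In spirit, graph-based learning means passing information along the edges of the propagation graph. The sketch below implements one round of neighbourhood feature averaging, the core operation in a graph convolutional layer, in plain NumPy. The tiny graph, the features and the weights are all invented; Fabula’s real model is certainly far more elaborate.

```python
# One round of neighbourhood feature averaging over a toy
# propagation graph: the core idea behind graph convolutions.
# The graph, features, and weights are invented for illustration.
import numpy as np

# Adjacency matrix of a tiny share cascade: node 0 is the source
# post, nodes 1-3 are accounts that re-shared it.
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 0, 0],
    [0, 1, 0, 0],
], dtype=float)

# Per-node features, e.g. follower count and account age (scaled).
X = np.array([
    [0.9, 0.1],
    [0.2, 0.8],
    [0.4, 0.5],
    [0.1, 0.2],
])

# Add self-loops and normalise rows so each node averages over
# itself and its neighbours (a simplified GCN propagation step).
A_hat = A + np.eye(len(A))
A_norm = A_hat / A_hat.sum(axis=1, keepdims=True)

W = np.random.default_rng(0).normal(size=(2, 2))  # learned in practice
H = np.tanh(A_norm @ X @ W)  # updated node representations
print(H)
```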

The GoodNews team hopes to monetise this service through a start-up called Fabula AI, based in London. They hope to roll out the product at the end of the year, and envisage customers ranging from large platforms such as Facebook and Twitter to individual users.

‘Our bigger vision is that we want to become a credibility rating house for news, in the same way that certain companies rate a person's consumer credit score,’ said Prof. Bronstein.

Solve

Of course, that leaves a bigger question: can technology really solve fake news? Both researchers are sceptical, but convinced that technology can help. Nucci emphasises that the concept of fake news is contested, and that stories are often neither entirely true nor entirely false.

‘Fake news is not a mathematical question of algorithms and data,’ he said, ‘but a very philosophical question of how we deal with the truth. Nevertheless, our technology can help improve transparency around fake claims and misinformation.’

Prof. Bronstein says it would be naive to expect technology to solve the problem of fake news.

‘It's not just about detecting fake news. It’s also a problem of trust and a lack of critical thinking. People are losing trust in traditional media and institutions, and that's not something that can be mitigated only through technology,’ he said.

‘It requires efforts from all stakeholders, and hopefully our project can play a part in this larger effort.’

Comments

Figuring out what the fake news is might be the first step. LOL. You all haven't even figured that out yet. When you get to that first step, then you can figure out how to use AI to end it. Hint: the reason we have fake news is because the perpetrators of fake news are intentionally lying. Not sure how AI is going to end people deciding to lie. More likely, the perpetrators of fake news will simply program the AI to call real news fake, on the theory that people will be more willing to trust the AI because it's an AI than human beings lying to them.

"Fake news " can also be an indication of an interpretation that another person or faculty have chosen to disagree or agree with! As long as people are free to comment, fake news as you call it will lose power solely because it is fake but "fake news" that the rest of us are not allowed to see is a forced and dangerous control.
