Fake news, bots and extremist content have flooded social media, and our trust in media has declined as a result. 

A 2018 Knight-Gallup poll found that while more than 80 percent of U.S. adults believe trustworthy news media play a vital role in democracy, only 44 percent could name a single news source they believe reports objectively. 

On a scale of 0–100, the average American scored a 37 when it came to trust in the media. For conservative Republicans, the average score was even lower at 18. 

Most Americans agree that something needs to be done about fake news. Many also assume that the technology exists to weed out fake news, but that news and social media sites are refusing to put these tools to use. 

This assumption, however, isn’t quite accurate. Eliminating harmful content with technology has turned out to be a taller order than anyone expected. 

For now, at least, responsibility to spot and address fake news lies with the reader. 

What’s So Hard About Weeding Out Fake News?

In 2016, Facebook announced that artificial intelligence could be used to eliminate fake news, depictions of violence and other harmful content from the site — if the company could determine how to apply the technology responsibly. 

That’s a big “if,” according to Facebook’s leadership. 

“What’s the trade-off between filtering and censorship? Freedom of expression and decency?” Yann LeCun, Facebook’s chief AI scientist, said in 2016. The problem, according to LeCun, wasn’t in developing the technology itself. It was in setting the technology’s boundaries — this is acceptable, this is not — in an ethical fashion. 

Four years later, however, the technology itself has proved a bigger hurdle than anyone expected. Joe Uchill at Axios reports on the work of MIT researcher Tal Schuster, whose experiments have repeatedly shown that machine learning struggles to flag false news. 

In one study, Schuster and his team trained an AI to spot whether a text had been generated by a human or by a computer program. The AI did fairly well at discerning the difference. But that skill didn’t transfer to the real world, where knowing a human wrote an article is no guarantee that the article reports actual events.

In a second study, Schuster and colleagues tried to teach an AI to identify whether statements were false or true. Here, however, the AI found a shortcut: It discovered it could maximize its chances of being correct if it simply flagged every negatively phrased sentence as “false.”
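
To make that shortcut concrete, here is a minimal sketch of how such a bias can emerge. This is not Schuster’s actual experiment; the tiny claim dataset, its labels and the bag-of-words model are hypothetical, chosen only for illustration. Because every false claim in the toy training data happens to contain a negation word, the classifier learns to treat negation itself as evidence of falsehood.

```python
# A minimal sketch (not Schuster's experiment): a toy, deliberately biased
# dataset in which every false claim contains a negation word, so a simple
# bag-of-words classifier learns the negation shortcut instead of facts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_claims = [
    ("The Eiffel Tower is in Paris", "true"),
    ("Water boils at 100 degrees Celsius at sea level", "true"),
    ("The Earth orbits the Sun", "true"),
    ("Mount Everest is the tallest mountain on Earth", "true"),
    ("Paris is not the capital of France", "false"),
    ("Water does not freeze at zero degrees Celsius", "false"),
    ("The Moon is not a natural satellite of Earth", "false"),
    ("Humans never landed on the Moon", "false"),
]
texts, labels = zip(*train_claims)

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

# "Sharks are not mammals" is true, but it is phrased negatively, so the
# biased model is likely to flag it as false.
print(model.predict(["Sharks are not mammals"]))  # expected: ['false']
```

Real fact-checking datasets are far larger, but the point stands: if a surface cue correlates with the label, a model will lean on the cue rather than on knowledge of the world.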

Using artificial intelligence to solve the fake news problem is difficult because AI is bad at large, general problems. Currently, AI performs best when it is given a narrow, defined problem, such as beating a human at chess or generating predictive-text suggestions. The larger or more general the problem, the more likely the AI is to take shortcuts. 

While it may be possible to connect a number of narrowly focused AIs to reach a better general result, we’re not there yet. Currently, humans can produce fake news much more quickly and more effectively than computers can identify it. 

And many social platforms’ attempts to target fake accounts aren’t working, either. In one study, Facebook missed 95 percent of fake accounts even when users reported the accounts as fake, NATO Strategic Communications experts Sebastian Bay and Rolf Fredheim write in a report for Singularex.

Free Speech and the Fake News Fight on Social Media

Even if the tech tools existed to automate the fake news fight, how would we apply them in a way that targeted falsehoods while preserving opinions and debate? The question is particularly relevant to U.S. audiences, where the right to free speech on political topics is codified in the First Amendment. 

Often, fake news uses many of the same keywords as legitimate news articles or debates on political and social topics. Simply targeting individual keywords or phrases, then, risks sweeping up opinion and debate along with false statements of fact.

Yet the problem also lies in the generation of fake opinions. For instance, in a 2019 study published in Technology Science, researcher Max Weiss generated 1,000 deepfake bot comments in response to a call for comment published by the Centers for Medicare and Medicaid Services (CMS). All 1,000 comments were unique and focused on specific policy positions. In fact, they were real enough to convince Medicaid.gov’s administrators to accept them.

While Weiss did reveal the nature of the experiment and ask Medicaid.gov to remove the generated comments so as not to influence public debate, the experiment demonstrates the profound difficulty of targeting fake sources of political opinion and commentary — or even of identifying them as fake in the first place. Where do we draw the line when it comes to shaping online conversation ethically? 

Another question of ethics arises when we rely on humans to screen out the most harmful content. Several reports featuring people who have worked as Facebook content moderators, for instance, have focused on the harmful effects of the work. The effects include cases of post-traumatic stress disorder (PTSD) caused by reviewing content that included hate speech, animal cruelty and murder, as Casey Newton at The Verge details in a fantastic piece of reporting.

Humans are currently capable of screening out some, though not all, fake or harmful content in a way computers are not. The impacts of such screenings on public discourse, speech rights and human health, however, demand further consideration. 

DIY Tools for Fighting Fake News

To date, organizations that try to fight fake news online typically do so in one of two ways, Rani Molla at Vox writes. Either they focus on educating readers to spot and avoid fake news, or they seek to improve the trustworthiness of sources by eliminating fake ones or by rating each source’s veracity. 

While online tools for spotting fake news can help, learning to read and engage critically is the best defense for an individual seeking to protect themselves and their social circles from fake news online.

Learning to Avoid Fake News

A 2017 study by Stanford University researchers Sam Wineburg and Sarah McGrew compared the ways three groups of people read websites and evaluated their contents: Ph.D. holders in history, undergraduate students and professional fact-checkers. The researchers found that the historians and undergrads struggled to identify some fake news sources, typically because they did not compare a website to other sources when evaluating credibility. 

By contrast, the professional fact-checkers would compare information on a website to several other sources before coming to a determination about credibility. Perhaps unsurprisingly, the fact-checkers were better at identifying fake news sources than the students or historians. 

The lesson: A single website cannot be trusted as a source about that website’s credibility. 

Instead of attempting to judge the quality of a site’s information from its professional-looking layout, logos or URL, search for sources that repeat substantially the same information. If the site itself links to sources, skim them to determine whether the information in the sources actually supports the claims made in the original piece. 

Reading critically is also essential to protecting one’s ability to make decisions consistent with one’s moral and ethical values. A study by Gillian Murphy and fellow researchers published in Psychological Science found that readers could form false memories of events described in fake news stories, especially when those stories confirmed their existing political or ethical opinions. 

As a result, voters could be swayed toward one candidate or stance, or away from another, based on stories that were simply untrue. By comparing multiple sources and thinking carefully about their content, readers can develop a broader view of an issue and avoid basing decisions on falsehoods.

Saying No to Shares

Finally, social media users can decline to be part of the problem. 

A 2018 study by MIT researchers Soroush Vosoughi, Deb Roy and Sinan Aral found that false news stories reached 1,500 people about six times as fast as true stories, and that the most viral falsehoods reached up to 100 times as many people. 

One reason these messages travel further and faster is that social media algorithms promote content based on its popularity. “Is this getting attention?” is the sort of narrow problem an AI can easily solve — and social media sites use that capability to keep people engaged. 
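
As a rough illustration of that narrow problem, here is a minimal sketch of a popularity-driven feed. It is not any platform’s real ranking code; the Post fields, the weights and the engagement_score function are assumptions made up for the example. The ranking asks only whether a post is getting attention, so a false but provocative post can outrank careful reporting.

```python
# Minimal sketch, not any platform's actual ranking system: order a feed
# purely by engagement signals, so the most reacted-to posts surface first,
# regardless of whether they are true.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int

def engagement_score(post: Post) -> float:
    # Hypothetical weights: shares and comments count more than likes
    # because they spread the post further and keep people on the site.
    return post.likes + 3 * post.comments + 5 * post.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # "Is this getting attention?" is the only question this ranking asks.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Careful, sourced report on the local budget", likes=40, shares=2, comments=5),
    Post("Outrageous (and false) celebrity rumor", likes=30, shares=50, comments=80),
])
print([p.text for p in feed])  # the false but viral post ranks first
```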

“The way they keep people clicking and sharing and commenting is prioritizing things that get your heart pumping,” Andrew Marantz, author of the book “Antisocial: Online Extremists, Techno-Utopians, and the Hijacking of the American Conversation,” tells Boulder Weekly.

“It’s like stocking a huge grocery store, but all of the visible aisles are Oreos and rat poison.” 

One of the best tools to combat fake news, then, might also be the simplest: Don’t share a link or post unless you’re confident about its source. Screenshots can offer a way to discuss a headline or social media post without alerting the site’s AI that the content is interesting and therefore promotable. 

Images by: Aleksandr Davydov/©123RF.com, Josef Kubeš/©123RF.com, undrey/©123RF.com