
The weaponization of news: how NewsGuard is combating false narratives online

December 11, 2023 | UGC

In a post-truth age, misinformation, disinformation and compelling counterfactuals are the new currency. For some audiences, it doesn’t matter what is accurate and true; it’s what they feel ought to be true that counts.

Fake news feeds this confirmation bias. When false information is framed as authentic news and circulated as though from a credible source, it has the capacity to undermine the integrity of all news content.

A Stanford study into ‘Social Media and Fake News in the 2016 Election’ defines fake news as “news articles that are intentionally and verifiably false, and could mislead readers.” It is distinct from inaccurate reporting in that it is deliberately misleading rather than a simple reporting error.

But just how harmful is fake news?


The Stanford study estimates that the average US adult read and remembered one or more fake news articles during the 2016 US presidential election. Exposure to pro-Trump articles was potentially higher than exposure to pro-Clinton articles, it states.

Harvard Kennedy School’s Misinformation Review has identified a general decline in mainstream media trust across all levels of political ideology as exposure to fake news increases. In a study carried out around the 2018 midterms, it estimated a 5% decrease in media trust due to misinformation exposure. It also found that fake news consumption was linked to lower political trust – although only for strong liberals.

Democracy and journalism aren’t the only areas where fake news is clouding the public conversation. It can inflame civil unrest, shape the narrative of wars and have implications for health too. A 2022 World Health Organization review states that people “feel mental, social, political and/or economic distress due to misleading and false health-related content on social media during pandemics, health emergencies and humanitarian crises.”

Identifying and labeling instances of fake news is a vital step towards restoring trust. NewsGuard ratings are one option for doing this. The ‘Internet trust tool’ provides trust ratings for news and information websites that it says account for 95% of online engagement with news in the US, UK, Canada, France, Germany, Italy, Austria, Australia, and New Zealand.

Veena McCoole, VP of Communications and Marketing at NewsGuard explains how the company is tracking the weaponization of fake news: “Our team of global misinformation analysts is constantly monitoring the international news and information landscape, and have historically been among the first to publish information on new coordinated influence campaigns, state-sponsored disinformation activity, and more.

“With Russia’s war against Ukraine, we’ve seen state-sponsored media outlets and the government wage their war using disinformation tactics as well as physical means. NewsGuard has already flagged emerging false claims regarding the Israel/Hamas war, days after the war began.”

As online discourse becomes more polarized, she adds, both sides of the political aisle increasingly weaponize information against each other, leading to a “distortion of the truth” and “a rise in exaggerated claims that soon become full-blown misinformation.

“Whether it’s the secretly partisan-funded ‘pink slime’ websites masquerading as local news publishers that spent approximately $3.94 million on ad campaigns in the run-up to the 2022 midterms, or Trump’s Truth Social actively boosting the online extremist movement by promoting QAnon content, evidence of coordinated influence campaigns and weaponized ‘information’ will only become more prevalent.”

The generative AI revolution presents a great threat to trust in information and the business of journalism, Veena acknowledges. “Our research has identified an alarming propensity for generative AI chatbots to respond to prompts about topics in the news with well-written, persuasive, and entirely false accounts of the news: in some cases, complying with 100% of requests to propagate misinformation.”

For bad actors looking to create and disseminate fake news and disinformation, generative AI models offer the capability to do so at unprecedented scale, she says. “Responsible data can prevent AI models from spreading conspiracy theories and other falsehoods. It requires human judgment and accountability, which NewsGuard is uniquely equipped to provide.

“Companies offering generative AI products can license NewsGuard’s human-curated data to fine-tune their models and to create post-processing guardrails to recognize and debunk demonstrably false narratives, and treat content from trustworthy news sites differently than content from misinformation sites.”
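To make the guardrail idea concrete, here is a minimal sketch of what such a post-processing step could look like. The domains, trust scores and threshold below are invented placeholders, not NewsGuard’s actual licensed data or API; a real integration would substitute the licensed ratings feed.

```python
import re

# Hypothetical trust ratings keyed by domain, standing in for licensed,
# human-curated data such as NewsGuard's (these scores are invented).
TRUST_RATINGS = {
    "example-reliable-news.com": 92.5,
    "example-misinfo-site.net": 12.0,
}

TRUST_THRESHOLD = 60.0  # assumed cutoff between trusted and untrusted sources

# Extract bare domains from URLs cited in a model response.
DOMAIN_PATTERN = re.compile(r"https?://(?:www\.)?([a-z0-9.-]+)", re.IGNORECASE)

def guardrail(model_output: str) -> str:
    """Post-process a chatbot response, flagging citations of low-trust domains."""
    flagged = []
    for domain in DOMAIN_PATTERN.findall(model_output):
        score = TRUST_RATINGS.get(domain.lower())
        if score is not None and score < TRUST_THRESHOLD:
            flagged.append(f"{domain} (trust score {score})")
    if flagged:
        return (model_output
                + "\n\n[Caution: this response cites low-trust sources: "
                + "; ".join(flagged) + "]")
    return model_output

if __name__ == "__main__":
    response = ("The claim is confirmed, per https://example-misinfo-site.net/story, "
                "although https://example-reliable-news.com/report disputes it.")
    print(guardrail(response))
```

The design point worth noting is that the check runs after generation, so it can annotate or suppress a response without retraining the underlying model.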

For those tasked with monitoring, disproving and removing fake stories from social platforms, the virality of this type of content presents a formidable challenge. A 2018 study by MIT scholars found that true stories took around six times as long as false ones to reach 1,500 people on Twitter (now X).

“Policing false information at scale requires a multi-pronged approach,” says Alex Popken, WebPurify’s VP of Trust & Safety. “First, platforms must develop policies governing content integrity and authenticity; then, they integrate with expert-led services like NewsGuard to keep abreast of the most harmful viral trends. Finally, they leverage WebPurify’s AI and human moderators to remove or label content that conflicts with these policies.”
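As a rough illustration of that multi-pronged flow, the sketch below chains the three prongs in order: hard policy rules first, then an expert-curated list of harmful narratives, then an AI risk score with escalation to human review. All names, thresholds and data here are hypothetical stand-ins, not WebPurify’s or NewsGuard’s actual systems.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    LABEL = "label"
    REMOVE = "remove"
    HUMAN_REVIEW = "human_review"

@dataclass
class Post:
    post_id: str
    text: str
    source_domain: str

# Prong 1: hard platform policy rules (hypothetical examples).
POLICY_BLOCKED_DOMAINS = {"example-misinfo-site.net"}

# Prong 2: expert-curated harmful viral narratives (hypothetical feed).
KNOWN_FALSE_NARRATIVES = {"example debunked claim"}

def classify_risk(text: str) -> float:
    """Placeholder for an AI moderation model returning a risk score in [0, 1]."""
    lowered = text.lower()
    return 0.9 if any(n in lowered for n in KNOWN_FALSE_NARRATIVES) else 0.1

def moderate(post: Post) -> Action:
    # Policy rules are enforced first and unconditionally.
    if post.source_domain in POLICY_BLOCKED_DOMAINS:
        return Action.REMOVE
    # Prong 3: AI scoring, with confident matches labeled and
    # ambiguous cases escalated to human moderators.
    score = classify_risk(post.text)
    if score >= 0.8:
        return Action.LABEL
    if score >= 0.5:
        return Action.HUMAN_REVIEW
    return Action.ALLOW

if __name__ == "__main__":
    post = Post("p1", "Shocking: example debunked claim goes viral!", "social.example")
    print(moderate(post))  # Action.LABEL
```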

Better digital literacy

Promoting digital literacy among the general public and empowering users to make their own, informed decisions can also help to alleviate some of the burden.

It’s an area that NewsGuard is also focusing on with its Media Literacy Programs, created with the help of Microsoft. “More than 800 public libraries globally use NewsGuard’s browser extension on their computers,” reveals Veena, “and educators worldwide consult NewsGuard’s extension and other educational resources to help students develop source evaluation skills.”

Labeling the credibility of information sources is a route to helping users navigate the landscape of online news and decide whether a piece of content is worth trusting and sharing, Veena suggests. By providing greater context and enabling users to reach their own conclusions, she says, “platforms can move away from playing claim-by-claim whack-a-mole with individual pieces of content.”

Despite its proliferation on social media, fake news has a long history in print media. The New York Sun’s ‘Great Moon Hoax’ of August 1835 reportedly led to an increase in circulation that made it the most popular newspaper in the world.

Today’s fake news is less focused on satire and entertainment, and more likely to be deployed to disrupt, destabilize and discredit chosen targets. It can have damaging real-world consequences, as evidenced by the ‘infodemic’ that led to panic, fear and depression during the Covid-19 pandemic.

At a time when politicians routinely label independent mainstream media as ‘fake news’ and position their own ‘facts’ as truth, we’re truly through the looking glass.

The tools exist to combat fake news. Human and AI-powered content moderation, community-based interventions for adding context, and strategic communication to amplify stories from trusted sources can all be deployed to counter false narratives. Now, perhaps more than ever, there is a need to weaponize the truth.