Misinformation vs disinformation: what’s the difference and how to stop it

March 21, 2025 | UGC

Whether it’s manipulated political content, misleading health claims, climate change denial or celebrity deepfakes, misinformation and disinformation can spread faster than ever in today’s interconnected world – and no platform is immune.

In our 19 years of keeping the internet safe, WebPurify, an IntouchCX company, has seen false information in user-generated content (UGC) evolve from all-caps comments on social media posts to the more nuanced and sophisticated deceptions we see today. But what is the difference between disinformation and misinformation? And why does it matter?

In this article, we’ll draw on expertise from serving clients across myriad industries to first define misinformation vs disinformation, then explore its most dangerous forms – such as political misinformation on social media, vaccine misinformation, and climate misinformation – before laying out the strategies we and other trusted experts use to combat it.

Misinformation vs disinformation: the spread of false narratives and how to stop them

The difference between misinformation and disinformation

  • Misinformation: false information spread unknowingly.
  • Disinformation: false information spread intentionally to deceive.

Misinformation is defined as false or misleading information that is unintentionally presented as fact. Conversely, disinformation is deliberately created or spread to deceive or manipulate.

So, in short, the difference between misinformation vs disinformation is intent.

Disinformation also has a habit of becoming misinformation. During the Covid-19 pandemic, disinformation regarding vaccine safety emerged as a prominent challenge for content moderation teams. Initially, this disinformation was propagated deliberately by certain groups and individuals hoping to foment distrust and fear about the vaccines’ efficacy and safety. These false claims, ranging from exaggerated side effects to unfounded conspiracy theories about vaccine ingredients, were strategically crafted to undermine public confidence in the vaccination campaign at a crucial point in the pandemic.

However, as this disinformation permeated social media and other communication channels, it began to morph into misinformation. Well-meaning individuals, influenced by the misleading content and often lacking access to accurate information, started to share these claims unknowingly.

This transformation from disinformation to misinformation significantly impeded public health efforts, as it led to vaccine hesitancy and greater resistance among wider populations who were unintentionally spreading these inaccuracies that they believed to be true.

Examples of misinformation

One of the first widespread examples of misinformation dates back to “The Great Moon Hoax” of 1835, when a popular newspaper published fabricated accounts of inhabitants on the moon. The claims were widely believed and caused a sensation before they were eventually debunked.

Adding to this historical context, Holocaust denial is a classic example of disinformation, where falsehoods were deliberately spread with the intent of getting people to question the existence of the Holocaust, despite overwhelming historical evidence. This disinformation, often rooted in anti-Semitic agendas, aims to distort historical truth for ideological purposes.

The Sandy Hook conspiracy, popularized by conspiracy theorist Alex Jones, initially emerged as disinformation with false claims that the school shooting was a hoax. The claims caused the victims’ families immense distress and led to landmark defamation cases, where Jones was ordered to pay nearly $1.5 billion in compensation.

What are the common types of misinformation?

  • Political misinformation on social media
  • Health and vaccine misinformation
  • Climate misinformation
  • Financial misinformation (investment scams, stock market rumors)
  • Media & celebrity misinformation (deepfakes, fake news stories)

Misinformation takes many forms, impacting nearly all aspects of society. The examples below represent the core of misinformation challenges tackled by WebPurify. The approach to each is driven by the high volume of misleading content and the severe implications of its spread. Real-world harm is the litmus test for the intensity of WebPurify’s enforcement.

“We are deeply committed to ensuring the credibility and integrity of information online,” says Alexandra Popken, WebPurify’s VP of Trust & Safety. “We understand that in the digital age, the battle against misinformation is both complex and critical, which is why we use both state-of-the-art technology and expert human insight to identify and mitigate false information quickly and at scale.”

Political misinformation on social media

Social media platforms, due to their reach and engagement-driven algorithms, have become prime vehicles for bad actors spreading political falsehoods.

“The 2016 US election was the turning point,” says James Alexander, former Head of Safety Operations at Twitter (now X). “It’s where people woke up both from an industry standpoint as well as a regulation and public standpoint.”

The realization that platforms could be exploited to spread misinformation that undermined something as significant as an election led to a substantial increase in resources for James’ team.

State actors, political organizations, and influencers use disinformation tactics, including manipulated media, fake news articles, and AI-generated content. For example, TikTok has been used to spread doctored videos of politicians to mislead viewers.

Health and vaccine misinformation

Medical matters are prime examples of how false information spread online can have serious real-world implications. As Veena McCoole, VP of Communications and Marketing at NewsGuard (Editor’s Note: Veena has since moved on from NewsGuard), points out, misinformation has also become increasingly weaponized by political parties, state actors, and others with an agenda to push: “During the outbreak of Covid-19, the WHO coined the term ‘infodemic’ to describe the dangerous proliferation of misinformation associated with vaccines and the virus itself.”

One public health study found that 52 physicians practicing in 28 different specialties across the US propagated Covid-19 misinformation on vaccines, masks, and conspiracy theories on social media and other online platforms between January 2021 and December 2022.

False claims about Covid-19 vaccines, such as unverified reports of side effects or conspiracy theories about government tracking, led to vaccine hesitancy, which had a major impact on public health efforts worldwide.

Climate misinformation

When it comes to climate change, misinformation can contribute to stalling progress on environmental policies and prevent meaningful action on global warming. Myths about climate science, deliberate understatements of human impact, and overstated claims about unproven technological solutions all serve to confuse public perception of the issue.

WebPurify’s role is to flag and filter out content that contradicts the scientific consensus, helping platforms stay aligned with responsible and accurate environmental reporting.

Financial misinformation (investment scams, stock market rumors)

Financial misinformation exploits economic anxieties and a lack of financial literacy to mislead investors and manipulate markets.

Investment scams often involve false promises of high returns, misleading stock tips, or fraudulent cryptocurrency schemes. Stock market rumors spread through social media and forums are another form of financial misinformation.

Media & celebrity misinformation (deepfakes, fake news stories)

Media misinformation includes manipulated videos, fake celebrity endorsements, and sensationalized news stories.

Deepfake technology has been used to create fake speeches and fabricated statements from public figures, leading to confusion and reputational damage. Similarly, fake celebrity endorsements for financial scams have misled consumers into fraudulent schemes.

Misinformation doesn’t always come from fully AI-generated deepfakes. Often, it involves real images and videos taken out of context or subtly altered to mislead. As James points out, some of the most dangerous misinformation comes from celebrities and influential figures themselves: “The biggest risk is [misinformation coming from] somebody who already has clout.”

How misinformation spreads

There are numerous examples of misinformation being employed throughout history in an attempt to influence people and change public opinion. But the momentum, reach and open access of today’s online platforms make it possible to disseminate and amplify misleading information to huge audiences in highly efficient ways.

Being part of the conversation has never been easier. The democratization of information and the dynamic social media landscape provide myriad opportunities for UGC to connect with a global audience. It’s big business too. But there is also an ugly side to UGC. Its capacity to cut through, build trust and cultivate engagement has made it a prime target for abuse by bad actors.

Influencers who are economical with the truth about the products they promote are one thing, but targeted disinformation campaigns by nefarious groups are quite another.

Echo chambers and algorithmic amplification

At the heart of the matter are the algorithms, which are designed to prioritize content based on engagement rather than accuracy, meaning that the most emotionally charged, controversial, or misleading information often rises to the top of people’s feeds.

“With platforms whose algorithms surface content to users outside of their social circle, individuals need not have a conspiratorial uncle or friend-of-a-friend in order to be served up misinformation in their feed – the algorithm does it for them,” says Veena.

Most social media users tend to follow people or sources that align with their beliefs, reinforcing their pre-existing narratives. Over time, this creates echo chambers, where users are exposed only to viewpoints that reinforce their own perspectives. Algorithmic personalization further intensifies this effect by curating content that aligns with users’ engagement history, making it harder for people to encounter opposing viewpoints.

This can create a dangerous cycle. Disinformation, when deliberately seeded by bad actors, gains traction and is unknowingly shared as misinformation by well-intentioned users.
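
To make that mechanic concrete, here is a minimal sketch of an engagement-driven ranking loop. The post fields, weights and the “outrage” signal are hypothetical placeholders rather than any platform’s real algorithm; the point is simply that when ranking optimizes for predicted engagement instead of accuracy, emotionally charged or misleading posts can outrank sober corrections.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    outrage_score: float  # hypothetical 0-1 output of an emotion classifier

def engagement_score(post: Post) -> float:
    # Engagement-only ranking: nothing here rewards accuracy, so provocative
    # or misleading posts tend to float to the top of the feed.
    return post.likes + 3 * post.shares + 2 * post.comments + 50 * post.outrage_score

feed = [
    Post("Calm, sourced explainer on vaccine trial data",
         likes=120, shares=10, comments=8, outrage_score=0.1),
    Post("SHOCKING claim about what's really in the vaccines",
         likes=90, shares=60, comments=45, outrage_score=0.9),
]

for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):7.1f}  {post.text}")
```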

The challenge of scale in moderating misinformation

James’ experiences at Twitter are a testament to the evolving challenges social media platforms face. His team’s early focus on synthetic and manipulated media was crucial in setting the stage for broader misinformation policies that arose later around Covid-19 and the 2020 claims of election fraud.

In the beginning, James and his team believed the solution to combating misinformation would be largely automated. But as they soon found out, humans were crucial.

He candidly admits, “We were entirely confident this was going to be a mostly automated method…but that was part of the biggest issue at the very beginning.” The nuanced nature of misinformation required discernment that went beyond algorithms.

James’ strategy was to focus on misinformation that gained visibility and could cause real-world harm. “Most tweets don’t get seen by anybody, so the big thing is to target the ones that are being seen and causing problems.” This prioritization of high-visibility misinformation over obscure falsehoods is a crucial strategy for content moderation teams.

“You have to ask: Is this claim dangerous? Does it lead to real-world harm? Does taking it down do more harm than leaving it up?” James says. “These are incredibly difficult decisions for moderation teams, and the answers aren’t always clear.”
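
As an illustration of that triage logic, below is a minimal sketch of a review queue ordered by current visibility, spread velocity and potential harm. The fields, weights and categories are assumptions made for illustration, not Twitter’s or WebPurify’s actual tooling.

```python
from dataclasses import dataclass

@dataclass
class FlaggedItem:
    item_id: str
    views_last_hour: int    # proxy for current visibility
    share_velocity: float   # shares per minute, proxy for how fast it is spreading
    harm_category: str      # e.g. "health", "election", "satire"

# Hypothetical weights: claims with real-world safety or civic impact jump the queue.
HARM_WEIGHT = {"health": 3.0, "election": 3.0, "finance": 2.0, "satire": 0.5}

def priority(item: FlaggedItem) -> float:
    # High-visibility, fast-spreading, high-harm items reach human reviewers first;
    # stale or low-reach items can wait or be closed without action.
    reach = item.views_last_hour + 60 * item.share_velocity
    return reach * HARM_WEIGHT.get(item.harm_category, 1.0)

queue = [
    FlaggedItem("a1", views_last_hour=50, share_velocity=0.1, harm_category="satire"),
    FlaggedItem("b2", views_last_hour=40_000, share_velocity=12.0, harm_category="health"),
    FlaggedItem("c3", views_last_hour=8_000, share_velocity=2.5, harm_category="election"),
]

for item in sorted(queue, key=priority, reverse=True):
    print(item.item_id, round(priority(item)))
```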

How to combat misinformation and disinformation

Content moderation remains a key intervention in the fight to combat misinformation and disinformation. The problem is that it is largely reactive, and its effectiveness can be limited by the resources available.

“Too often, platforms rely on flagging instances of misinformation on a case-by-case basis,” Veena says. “This is impossible to scale, inevitably results in human error when some – but not all – content is flagged, and does not protect end users.”

By instituting policies to remove misinformation deemed inappropriate, Veena suggests that platforms can also “open themselves up to the ‘free speech’ and anti-censorship arguments that seek to preserve the rights of internet users to voice their opinions.”

Source credibility labeling can be an appropriate middle ground, she proposes, “enabling users to make their own decisions on whether a piece of content is worth sharing or trusting, based on its overarching editorial practices.”
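
As a rough illustration of that middle ground, the sketch below attaches a credibility label to a link by looking up its domain in a source-reliability table, rather than removing the post outright. The ratings, threshold and label wording are hypothetical placeholders, not NewsGuard’s actual scores or methodology.

```python
from urllib.parse import urlparse

# Hypothetical reliability scores (0-100) keyed by domain; a real system would
# pull these from a vetted ratings provider rather than hard-coding them.
SOURCE_RATINGS = {
    "example-news.com": 92,
    "questionable-site.net": 22,
}

def credibility_label(url: str) -> str:
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    score = SOURCE_RATINGS.get(domain)
    if score is None:
        return "Unrated source"  # label rather than block when the source is unknown
    if score >= 60:
        return "Generally adheres to basic standards of credibility"
    return "Proceed with caution: this source has significant credibility issues"

print(credibility_label("https://www.questionable-site.net/article/123"))
```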

At Twitter, this meant concentrating resources on the misinformation that gained high visibility and could cause real-world harm. James emphasizes the importance of precision: “Knowing for certain that it is misinformation can actually be really hard… taking aim at specific known problematic misinformation is much more valuable for the resources that are required.

“Focusing moderation efforts on misinformation that is actively spreading is key. If something is already 48 hours old and has stopped being shared, chasing it down isn’t as valuable as catching the next piece of viral misinformation before it takes off.”

The Covid-19 lab leak theory is a perfect example of the complexity of the task. Initially, content suggesting the virus originated from a lab in China was marked as dangerous misinformation and suppressed. However, as discourse and information about this theory evolved, Twitter had to adapt. “We backed off on that as more information came out…we didn’t want to be tipping the scale when we didn’t actually know the right answer.”

In this environment of uncertainty, counter-speech emerged as a potential tool to combat misinformation without outright censorship. James suggests that sometimes the answer to misinformation may not be to silence it but to allow it to be challenged.

Partnering with third parties that can provide unbiased fact-checking and/or threat intelligence around current viral misinformation campaigns is another worthwhile course of action.

“You’re not going to be an expert in everything,” James says, “so make sure that you have good partners, maybe in the news industry or in research investigations who can help lean on third-party expertise.”

Promoting digital literacy

Providing users with the critical skills they need to discern what they’re consuming is an effective proactive intervention. Veena highlights three ways that well-intentioned people can avoid being deceived by misinformation:

  • Look at the journalistic transparency and credibility of a source to make a more informed decision about whether the link or news article is something you can trust.
  • Practice lateral reading and cross-reference any claims you come across in articles with trusted sources.
  • Consider whether an image, video, or article you’re looking at is authentic or the spawn of generative AI. Tools like GPTZero and Hive Moderation can help detect if a piece of content was AI-generated (a minimal sketch of automating this kind of check follows below).
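
For teams that want to automate the third check at scale, here is a minimal sketch of calling an AI-content detection service over HTTP. The endpoint, authentication and response shape are hypothetical placeholders; consult the documentation of whichever detector (GPTZero, Hive Moderation or another) you actually use.

```python
import requests

# Hypothetical endpoint and key; real detectors expose their own URLs,
# auth schemes and response formats, so check the provider's docs.
DETECTOR_URL = "https://api.example-detector.com/v1/classify"
API_KEY = "YOUR_API_KEY"

def likely_ai_generated(text: str, threshold: float = 0.8) -> bool:
    response = requests.post(
        DETECTOR_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": text},
        timeout=10,
    )
    response.raise_for_status()
    # Assume the service returns a probability that the text is machine-generated.
    return response.json().get("ai_probability", 0.0) >= threshold

if __name__ == "__main__":
    print(likely_ai_generated("Paste the suspicious passage here."))
```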

Prebunking – or preemptive debunking – is a technique that can be employed to help people be less susceptible to misinformation techniques. Empowering online communities to counter unreliable content is another solution, particularly when faced with a high volume of viral misinformation.

Accountability

Holding individuals and entities accountable for promoting misinformation and disinformation through effective enforcement and regulatory measures is the endgame. But there are challenges in enforcing misinformation policies when the information available on certain topics is always changing.

“The Covid-19 pandemic origin is a classic example of this,” explains Veena. “New information emerged after the fact and changed the context of previous instances of ‘misinformation.’ This is why NewsGuard is careful to only debunk provably false statements for which there is credible evidence to the contrary.”
