6 challenges of countering misinformation on social media

December 8, 2023 | UGC

Social media gives misinformation wings. It’s easy to make lies fly on platforms that deal in the instant gratification of reposts and likes, but countering misleading content is decidedly harder. The majority of interventions are carried out reactively, which means the damage may already have been done.

The onus is always on social media platforms to do more. A study by scholars at USC and Yale SOM concludes that the reward structure of social media platforms serves to amplify misinformation and disinformation. “We need to create better environments on social media platforms to help people make better decisions,” says co-author of the study, Gizem Ceylan. “We cannot keep on blaming users for showing political biases or being lazy for the misinformation problem.”

So what are the challenges facing those on the frontline of dealing with social media’s misinformation problem? Where should their priorities lie when countering the fog of false and inaccurate posts? James Alexander, former Global Head of Illegal Content & Media Operations at Twitter, offers his insight.

1. Free speech vs the need to counter misinformation

Striking a balance between preserving free speech and responding to misleading content is a key challenge. There’s a strong desire to keep intervention to the minimum necessary, James points out: “You don’t want to take action until you know for certain that it is misinformation.

“Trying to make sure that you’re as confident as you can be in knowing the answer can be very labor-intensive, so taking aim at specific known problematic misinformation is much more valuable for the resources that are required. Most tweets don’t get seen by anybody, so you really want to target the ones that are being seen and are causing problems.”

In many cases, adding warnings or surfacing counterspeech can be a better solution than simply removing misinformation, suggests James. “With political misinformation for example, in most cases we kept it up. If we had removed it, we might have been limiting a politician’s ability to say what they have a right to say, even if we don’t like it.

“You saw that happen with some of Trump’s posts. A lot of them were in violation of our policy – which we highlighted – but we kept them up because everybody had a right to know that he had said what he did.”

2. Manipulated media vs synthetic media

Deepfakes and other synthetic media are relatively easy to identify, but manipulated media can be much harder to deal with.

James explains the challenge that his team at Twitter faced when trying to combat manipulated media on the platform: “An image or a video that was created fresh from whole cloth is synthetic. However, that’s not easy to do in a way that is incredibly reliable. Even the synthetic media that went viral was usually very obviously synthetic. So that wasn’t really what was causing the problem. It’s much more convincing – and more damaging – to take a real image or video and use that instead.

“Say that someone uses a video of a riot that happened in an eastern European country in 2010 and then says that this riot is actually happening in Detroit in 2019 or in 2020. If I want to take that down automatically, I then have to think about context. There might be the issue of people talking about the riot in 2010, so I have to decide if I’m okay with the fact that I’m going to take down good material alongside the bad.”
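To make that trade-off concrete, here’s a minimal sketch of the kind of triage logic James is describing: match a known piece of misused footage, then weigh the post’s textual context before acting. Everything in it is illustrative (the hard-coded media fingerprint, the keyword cues, the action names); it is not Twitter’s actual pipeline.

```python
# A toy triage rule for reused footage. The media fingerprint stands in for
# any near-duplicate hash (e.g. a perceptual hash); the cue lists are
# illustrative assumptions, not any real platform's policy.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    media_hash: str  # fingerprint of the attached image or video

# Fingerprint of the real 2010 riot footage that keeps being re-captioned.
KNOWN_MISUSED_MEDIA = {"a1b2c3d4"}

# Cues that the post frames old footage as a current event, vs. cues that
# it is legitimately discussing the original 2010 event.
MISLEADING_CUES = ("happening now", "breaking", "detroit")
LEGITIMATE_CUES = ("2010", "archive", "years ago")

def triage(post: Post) -> str:
    """Decide what to do with a post that reuses known footage.

    Matching the media alone is not enough: the same clip appears in
    legitimate discussion of the original event, so textual context
    determines the action.
    """
    if post.media_hash not in KNOWN_MISUSED_MEDIA:
        return "no_action"
    text = post.text.lower()
    if any(cue in text for cue in LEGITIMATE_CUES):
        return "no_action"           # likely discussing the real 2010 riot
    if any(cue in text for cue in MISLEADING_CUES):
        return "label_or_remove"     # old footage framed as current news
    return "queue_for_human_review"  # media matches, context is ambiguous

print(triage(Post("Riots happening now in Detroit!", "a1b2c3d4")))          # label_or_remove
print(triage(Post("Remembering the 2010 unrest, years ago.", "a1b2c3d4")))  # no_action
```

Even this toy version makes the cost visible: every rule that protects legitimate discussion of the 2010 riot is a rule a bad actor can write around.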

3. Misleading social posts from public figures

One of the biggest risks to platforms is misinformation or disinformation that comes from public figures.

The virality of misleading content that’s intentionally or unintentionally spread by celebrities and other high-profile accounts can be hard to counter. As CNN reports, a study into the influence of anti-vaccine messages and Covid misinformation shared during the pandemic highlighted the impact that news anchors and politicians, in particular, can have on public opinion.

Moderating fake narratives that are transmitted through celebrity accounts can lead to accusations of an attack on freedom of speech, James says. “They can attack the platform if they don’t like the decision you made. But they’re also aware that if you take down everything other than what was posted by one person who is verified and two million people saw it, then you may have just dealt with a drop in the bucket rather than the rest of the bucket. That can almost be worse because it feels like you’re doing a lot of work and not getting any benefit from it.”

4. Fact-checking takes time

Scale is by far the biggest challenge in combating misinformation on social media. “One tweet, one incident, one topic that needs to be debunked is not that hard,” confirms James. “You can figure it out, you can prove it through a couple of different sources and then make sure that you present all the information and it’s done.

“But making sure that you find all of the posts about this topic, and that somebody can’t just post another tweet saying the same thing or something very similar that doesn’t have your message attached to it? Finding and verifying and labeling and posting all of that potential misinformation is like trying to boil the ocean.”
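A rough illustration of the scale problem James describes: the naive near-duplicate matcher sketched below (word shingles plus Jaccard overlap, with invented posts and an arbitrary threshold) catches an exact repeat of a debunked claim, but a trivial rewording slips straight past it. None of this reflects any platform’s production approach.

```python
# Naive near-duplicate detection: compare word shingles of each post against
# a debunked claim. The claim, posts, and 0.5 threshold are all made up for
# the example.
def shingles(text: str, n: int = 3) -> set[tuple[str, ...]]:
    """Break text into overlapping n-word shingles for fuzzy comparison."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(max(len(words) - n + 1, 1))}

def jaccard(a: set, b: set) -> float:
    """Overlap between two shingle sets (1.0 = identical, 0.0 = disjoint)."""
    return len(a & b) / len(a | b) if a | b else 0.0

DEBUNKED_CLAIM = "the riot in this video is happening in detroit right now"

posts = [
    "The riot in this video is happening in Detroit right now",  # exact repeat
    "This video shows a riot happening right now in Detroit",    # reworded
    "Footage of the 2010 unrest in eastern Europe, for context", # legitimate
]

claim = shingles(DEBUNKED_CLAIM)
for post in posts:
    score = jaccard(claim, shingles(post))
    verdict = "attach label" if score >= 0.5 else "leave alone"
    print(f"{score:.2f}  {verdict}:  {post}")
```

The exact repeat scores 1.00 and gets labeled; the reworded version shares no shingles at all, scores 0.00, and sails through unlabeled. That gap between “the same thing” and “something very similar” is exactly why comprehensive labeling feels like boiling the ocean.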

Crowd-sourced interventions are one way to debunk misinformation at scale. Twitter/X’s Community Notes feature is a powerful tool for adding context to posts, for example, but it’s reliant upon the speed with which the community can react to mendacious or ill-informed posts.

5. Targeted disinformation created by generative AI

The low cost and ease of content creation afforded by generative AI allow misinformation to be disseminated rapidly. It’s not just the volume of content that can be generated that’s an issue, but also the possibility of manipulating groups or individuals with highly targeted disinformation campaigns.

As Hany Farid, a professor of computer science at the University of California, Berkeley, told Wired: “You could say something like, ‘Here’s a bunch of tweets from this user. Please write me something that will be engaging to them.’ That’ll get automated. I think that’s probably coming.”

Generative AI has the potential to exacerbate the challenges of separating fact from fake, but assets produced in this way are only part of the problem, James says.

“Where it is used, I believe that, in most cases, it will be relatively obvious if it was created by AI. The larger issue will be the entire post, its usage, distribution, and the conveyed message, rather than focusing specifically on the media itself. Just because everyone can recognize that an image is fake doesn’t mean people won’t believe it if it aligns with their desired narrative.”

As James points out, speed is the real issue with generative AI: “It’s now much easier to make lots of things, especially in the case of text, and to tailor misinformation to individuals. It’s possible to iterate very quickly and I think that might become a problem.”

6. Simple disinformation is where the problem lies

‘Don’t sweat the small stuff’ might be wise advice for life, but it isn’t necessarily true when it comes to staying on top of misinformation. It’s the simple posts on social platforms that can often prove most problematic.

“It’s really important not to get swept up by what’s fascinating or interesting but not necessarily very likely,” advises James. “People don’t usually make really complicated or complex misinformation. It exists and, of course, it can be very damaging. We’ve seen audio be a big problem for that recently and we don’t have great ways to counter audio abuse. But where possible, people are going to take a real thing and adjust it, or they’re going to take that real thing and just lie about it.

“They’re going to take the easy option because they can do that 50 times and get a lot more value out of those 50 posts than they do using the complex option one time, which you figure out and take down.”

James’ advice? Get specific and be flexible. “I think this is a place where some level of war room and on-your-feet thinking can be really valuable. It’s easy to get very excited and try to get all of your options out for, say, how you can deal with generative AI. However, you also have to remember that the most likely situation is going to be simpler than that. It’s going to be the old [tried-and-tested] stuff, but now they can go faster and they can adjust it at speed.”