
Social Media Leaves Content Moderation to AI Amidst Pandemic

May 23, 2020 | UGC

 

On March 16, Facebook sent home thousands of content moderators in the midst of a global pandemic. Other social media giants like YouTube and Twitter took the same cautionary measures. In the absence of a large portion of their human teams, these companies turned almost exclusively to their Artificial Intelligence solutions to enforce their moderation policies. To complicate things, not only are there fewer human eyes reviewing content, but housebound users are also causing a surge in social media use.

This heavy reliance on AI led industry professionals to predict an increase in potentially harmful content passing through the filters. Within the last month, some troubling mistakes have occurred, and research suggests there's much more to come if content moderators remain unable to do their jobs.

Early Complications… or Just a Bug?

It’s no secret that a hybrid of live moderators and AI is the most effective weapon against offensive content, so being forced to rely solely on AI is less than ideal. Social media companies were forthright in warning users that they should expect errors in flagging policy violations. YouTube, for example, announced “users and creators may see increased video removals, including some videos that may not violate policies.” The company also said that, in the interim, strikes would not be issued for videos that violate the rules unless there was little doubt the video was harmful.

As expected, content moderation problems began to reveal themselves early on. Last month, posts containing links to legitimate news sources, including USA Today and The Atlantic, were taken down for supposedly violating Facebook’s spam rules. However, the company was quick to chalk it up to a bug in its spam filter.

“This is not about any kind of near-term change, this was just a technical error,” said Mark Zuckerberg. Even if Facebook’s flagging of reputable news sources was merely a bug in its spam filter, it still raises the question: how well does the company’s AI catch actual misinformation? The expected answer is that, without humans to further evaluate the AI’s results, the system is flawed. This is by no means a criticism of Facebook’s AI specifically; the marriage of humans and AI is a general need across most moderation programs.

The Consumer Reports Investigation

To get a clearer sense of how effectively Facebook’s AI works, Consumer Reports put it to the test using ads for a fake organization it created. The “Self Preservation Society” disseminated ads with false information about the coronavirus, echoing some of the most popular erroneous claims, such as that the virus is a hoax or that small doses of bleach can strengthen your immune system.

Consumer Reports found that the platform didn’t flag a single ad for over a week. The only element flagged was a stock photo of a respirator-style face mask, which was simply swapped for a similar-looking mask that the platform then approved.

When Consumer Reports asked Facebook how many content moderators are now working from home, a spokeswoman for the company kept the numbers vague, telling them “a few thousand.”

Facebook isn’t alone in keeping quiet about how many moderators are actively working, though.

A Growing Problem

The coronavirus pandemic has created many unprecedented circumstances. As mentioned earlier, social media use is surging as people are unable to interact in person. It’s something of a perfect storm: a swell of social media activity is exacerbating the problem, misguiding users and thus putting them and others at risk. In addition, there has been an increase in hateful content and conspiracy theories concerning China and the virus’ origin there.

What’s more, certain groups of people are more likely to come across misinformation as a result of limited human resources. Not only are moderators prioritizing certain regions, but most AI algorithms also aren’t trained on less widely spoken languages, meaning they’re not as effective at catching bogus content for those audiences. A report from online activist group Avaaz shows, for example, that Facebook issues significantly fewer warning labels to speakers of Italian, Spanish, and Portuguese.

End in Sight?

No one can say for sure when social media companies will send their moderators back to the office. Not even the companies themselves. Although Mark Zuckerberg stated that Facebook is aiming to let “critical employees” return to work sooner than others – critical employees being the moderators who look for content related to terrorism or self-harm – the majority of the company’s employees will be required to work from home until at least June.

“Overall, we don’t expect to have everyone back in our offices for some time,” wrote Zuckerberg. Keeping the update vague is par for the course for large social media companies, especially when it concerns content moderation methods. But the coming months are sure to reveal more about how they’re weathering the storm.