
Beyond NSFW: Debunking common myths about content moderation

September 29, 2023 | UGC

Just as the enigmatic monolith in Stanley Kubrick’s 2001: A Space Odyssey served as a catalyst for human evolution, the dawn of Web 2.0 showed us a new way to communicate and interact through the power of user-generated content (UGC). But nearly two decades later, this UGC monolith has evolved beyond just a revolutionary way to keep up with friends and family, becoming a foundational element of our life online.

UGC is no longer limited to social media. It’s at the heart of our dating apps, woven into e-commerce, the engine behind product customization and so much more. Yet, when the conversation turns to moderating UGC, many misconceptions still tend to dominate the narrative. Too often, content moderation is seen as a black-and-white process, primarily focused on flagging or removing Not Safe For Work (NSFW) material. But the reality is far more intricate, layered, and sometimes even surprising.

The role of content moderation extends well beyond policing explicit or graphic content. It encompasses safeguarding brand identity, fostering a positive user experience and, at its core, preventing real-world harm to your platform’s users. Effective content moderation is a guardian that often works in silence but has a significant impact across both digital spaces and life in the real world.

Below, we aim to debunk some of the most prevalent myths surrounding content moderation services. We’ll dissect why these myths persist, shed light on the complexities that often go unnoticed, and offer insights grounded in our 17 years of industry experience.

Common myths about content moderation

Myth 1. Content moderation primarily focuses on blocking or removing NSFW material

While NSFW material is one of its primary targets, content moderation’s scope is much broader, demanding a wide range of skills from content review teams. Moderators often have to identify and take action on hate speech, IP infringement, misinformation, harassment, and content that is culturally insensitive or violates highly specific, custom community guidelines.

Let’s say a national restaurant chain is running a photo contest and doesn’t want to promote images that show the over-consumption of food. It can take two weeks, with no shortage of internal back-and-forth between multiple teams, just to define what counts as “over-consumption of food.” After all, if you can’t clearly define it, you can’t consistently enforce it. A client might believe at the outset that the rule is black and white, but over-consumption of fruits and vegetables, for example, might not be perceived the same way as over-consumption of fast food or sugary items. Where do you draw the line?

Likewise, a picture of a messy table filled with half-eaten food could be interpreted differently than a neatly presented banquet. Is one more indicative of over-consumption than the other? You also need to consider if the photo is meant to be humorous, satirical, or artistic. These are all things that need to be taken into account in advance of defining the rules, and part of our job is to help clients do this exact type of preparation and consideration.

And it’s important to remember that different platforms have unique user demographics and community standards that require tailored moderation policies. What works for a gaming platform may not work for an online educational community.

Myth 2. All content moderators are routinely exposed to traumatic content

Contrary to the belief that content moderation is only about sifting through disturbing content, many moderators work on lighter issues such as removing spam or low-quality content. Handling highly sensitive or triggering material is a specialized subset of moderation, in part because the majority of unwanted content simply isn’t that egregious and, when it is, only specially trained and vetted moderators are fit for the task. For more on this type of specialized content moderation, download our eBook profiling WebPurify’s CSAM (Child Sexual Abuse Material) moderation team, or read more about the insights we’ve learned from that difficult but important work.

Myth 3. AI algorithms alone can manage all types of content moderation

While AI can filter a large volume of content for obvious violations, human expertise is often necessary for interpreting nuance and context, such as detecting sarcasm or cultural references. For instance, a computer might not understand why a particular symbol is offensive to a certain culture. Likewise, AI might flag the text “Kill me now” and send it to our human moderators for review, who can ascertain that it was said sarcastically, as is often the case.
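To make that hybrid workflow concrete, here is a simplified, illustrative sketch of an AI-first triage step that escalates ambiguous items to human reviewers. The thresholds, function names and scoring model are hypothetical placeholders, not a description of any particular vendor’s system:

```python
# A minimal sketch of an AI-first, human-in-the-loop escalation flow.
# The thresholds and queue routing below are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class ModerationResult:
    decision: str      # "approve", "reject", or "escalate"
    reason: str

APPROVE_BELOW = 0.20   # assumed threshold: clearly benign
REJECT_ABOVE = 0.95    # assumed threshold: clearly violating

def triage(text: str, violation_score: float) -> ModerationResult:
    """Route content based on an AI risk score in [0, 1].

    Clear-cut cases are handled automatically; anything ambiguous
    (sarcasm, cultural references, context-dependent phrases) is
    escalated to a trained human moderator.
    """
    if violation_score >= REJECT_ABOVE:
        return ModerationResult("reject", "high-confidence violation")
    if violation_score <= APPROVE_BELOW:
        return ModerationResult("approve", "high-confidence benign")
    return ModerationResult("escalate", "ambiguous; needs human context")

# Example: a model might score "Kill me now" as moderately risky,
# so it lands in the human review queue rather than being auto-removed.
print(triage("Kill me now", violation_score=0.55))
```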

Myth 4. Content moderation decisions are straightforward yes-or-no jobs

Decisions often fall into gray areas that require careful assessment and sometimes even collective decision-making. Consider, for example, a piece of art containing nudity. A platform has a policy prohibiting adult sexual content, but it also allows for artistic expression. Should this scenario be permitted?

To the uninvested, this might come across as splitting hairs, but ultimately what counts as acceptable is defined by our client brands, and many companies look to tailor permitted content closely to the preferences of their user base.

Myth 5. Content moderators set the rules

A common misconception is that content moderators are the arbiters of morality, setting the rules of what can and cannot be shared. In reality, content moderators don’t create the rules. We are simply the enforcers of a platform’s community guidelines. Take, for example, niche dating or hookup sites where the norms (and types of permitted overtures or expectations) can differ dramatically from mainstream social media.

On such platforms, nudity in images might be permissible when gated, aligning with the service’s more liberal approach to adult interactions. Here, the primary concern is not so much NSFW content as the prevalence of catfishing scams, in which someone creates a fake dating profile to deceive others, causing everything from emotional trauma to financial loss for the victim. In these instances, content moderators aren’t looking to limit anyone’s freedom of expression (so long as that expression is between two consenting adults); rather, they are trained to be vigilant for signs of deceptive behavior or the use of stolen images.

Myth 6. Content moderation only occurs after content has been posted

Some people think that content moderation springs into action only after content has been uploaded and flagged by users. This reactive model is often perceived as the standard approach when, in fact, many responsible platforms are increasingly proactive in identifying and managing problematic material.

Modern content moderation is anticipatory: in the case of AI, review happens the moment someone clicks “post” and is completed in milliseconds. These technologies can identify a broad range of problematic content, from hate speech and harassment to gambling and gore.

Of course, there are reactive use cases for content moderation too, such as users reporting problematic content that is already live on a platform so it can be reviewed and remediated.
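As a simplified illustration of the difference between those two modes, the sketch below contrasts a proactive pre-publish check with a reactive user-report path. The check_content() helper is purely hypothetical and stands in for whatever automated screening a platform uses:

```python
# A minimal sketch contrasting proactive (pre-publish) and reactive
# (user-report) moderation paths. check_content() is a stand-in for
# any automated moderation call; the blocked terms are placeholders.

def check_content(post_text: str) -> bool:
    """Pretend AI check; returns True if the post looks safe."""
    blocked_terms = {"spam-link.example", "slur_placeholder"}
    return not any(term in post_text.lower() for term in blocked_terms)

def publish(post_text: str) -> str:
    """Proactive path: content is screened before it ever goes live."""
    if not check_content(post_text):
        return "held for review"          # never shown to other users
    return "published"

def handle_user_report(post_id: str) -> str:
    """Reactive path: a live post flagged by users is queued for re-review."""
    return f"post {post_id} added to human review queue"

print(publish("Check out spam-link.example"))   # held for review
print(handle_user_report("abc123"))             # queued after the fact
```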

Myth 7. Faster moderation automatically equates to better moderation

Rushed decisions lead to false positives and false negatives. Good content moderation strikes a balance between timeliness and accuracy, which we outline in our article on how to measure the success of content moderation.
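As a rough illustration of why speed alone isn’t the goal, the sketch below scores two hypothetical review queues on both accuracy (precision and recall) and turnaround time. All of the numbers are invented for illustration:

```python
# A minimal sketch of the trade-off described above: measuring both
# speed and accuracy rather than speed alone. All figures are invented.

def precision(true_pos: int, false_pos: int) -> float:
    return true_pos / (true_pos + false_pos)

def recall(true_pos: int, false_neg: int) -> float:
    return true_pos / (true_pos + false_neg)

# A "fast" queue that rushes decisions vs. a more balanced queue.
fast     = {"tp": 80, "fp": 40, "fn": 20, "avg_seconds": 5}
balanced = {"tp": 95, "fp": 10, "fn": 5,  "avg_seconds": 30}

for name, q in (("fast", fast), ("balanced", balanced)):
    print(
        f"{name}: precision={precision(q['tp'], q['fp']):.2f}, "
        f"recall={recall(q['tp'], q['fn']):.2f}, "
        f"turnaround={q['avg_seconds']}s"
    )
```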

Myth 8. Only newly uploaded content needs to be moderated

There are a number of reasons why platforms should be concerned about old content. For one, societal attitudes and norms are not static; they evolve over time. Content that may have been acceptable years ago could now be viewed as insensitive, discriminatory or even harmful.

Older posts can also suddenly take on new significance in light of current events. For instance, a post joking about a natural disaster may have been overlooked at the time it was uploaded but becomes glaringly inappropriate when a similar disaster occurs later on, perhaps this time with fatalities. Likewise, a post idolizing or defending a public figure may age very poorly if serious wrongdoing by that person comes to light down the line. And, of course, community guidelines and laws also change, requiring platforms to retroactively scrutinize existing content.
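As a simplified illustration, the sketch below shows what a retroactive re-scan might look like: existing posts are re-run through the current policy whenever guidelines change. The policy check, data store and policy version are hypothetical placeholders:

```python
# A minimal sketch of retroactive re-moderation: re-running existing
# posts through today's policy, not the policy in force when they
# were first approved. Everything here is a hypothetical placeholder.

from datetime import date

POLICY_VERSION = "2024-01"   # assumed current guideline revision

def violates_current_policy(post: dict) -> bool:
    """Pretend policy check against the current rules."""
    return "joke about disaster" in post["text"].lower()

def rescan(posts: list[dict]) -> list[dict]:
    """Flag old posts that no longer comply with current guidelines."""
    return [p for p in posts if violates_current_policy(p)]

archive = [
    {"id": 1, "text": "Joke about disaster from 2019", "posted": date(2019, 6, 1)},
    {"id": 2, "text": "Harmless vacation photo caption", "posted": date(2020, 3, 14)},
]

for post in rescan(archive):
    print(f"post {post['id']} flagged for re-review under policy {POLICY_VERSION}")
```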

Myth 9. The primary goal of content moderation is to censor viewpoints

One of the most contentious myths surrounding content moderation is the belief that its primary function is to censor or stifle differing opinions. To put it simply, the main goal of content moderation is to create a safe environment. That’s it.

Content moderators aren’t agents sent to shape the narrative of a debate or limit free speech. They simply enforce a platform’s community guidelines to help foster a welcoming space for all users. These guidelines often prohibit hate speech, harassment, misinformation, and other forms of harmful content that could be detrimental to the user experience or even pose real-world risks.

Take, for example, those platforms that had to introduce new policies around the discussion of COVID-19. The aim wasn’t to stifle discourse but to curb the spread of harmful misinformation that could have real-world health implications.

Content moderation aims to be inclusive, allowing for a variety of viewpoints to be expressed, but within the boundaries set by community guidelines, laws, and societal norms.

Myth 10. Only large platforms need content moderation

Small forums, niche communities, and specialized platforms are not immune to the challenges that come with user-generated content. Harmful content and conduct can infiltrate even the tiniest online spaces, and in some cases, smaller communities may be even more vulnerable due to limited resources.

Smaller platforms often require a more specialized form of content moderation that reflects their unique use cases. A one-size-fits-all approach borrowed from larger platforms is rarely effective.

Myth 11. The only reason companies invest in content moderation is to comply with legal requirements

Brand integrity and user experience are often equally important reasons for investing in content moderation. As more and more of our lives are spent online, robust content moderation can serve as a differentiator for brands, setting a platform apart in the eyes of users who prioritize safety and quality of interaction.

Myth 12. Money spent on content moderation is an expense rather than an investment

Proper moderation can enhance user retention and brand reputation, making it a wise long-term investment. And for platforms that host advertising in any capacity, a brand-safe environment is critical to holding onto ad spend. Strong content moderation is a cornerstone of business longevity.

Myth 13. Crowdsourcing human moderation is an equal substitute for a dedicated team

A specialized team trained in the nuances of your community and guidelines is often far more effective than a generalized crowdsourced approach. This is because crowdsourced teams typically have little or inconsistent training and accountability. You can read more in our blog post about the dangers of crowdsourced moderation.

Myth 14. Moderation only has digital-world effects

From preventing the spread of Child Sexual Abuse Material (CSAM) to combating misinformation that could influence election outcomes or public health, effective content moderation has very tangible real-world impacts. Last year alone, WebPurify’s content moderators were responsible for the arrests of more than 500 child sexual predators.
