5 Content Moderation Mistakes to Avoid
May 12, 2021 | Image Moderation, Video Moderation, UGC
While the future of many apps and platforms is based on adherence to the policies of Apple, Google, and other tech giants, it is also based on user experience.
Is your goal for 2021 to provide users with a safe experience by implementing the strongest moderation capabilities, while also creating an environment where they can express themselves?
To proactively prevent, identify, and filter objectionable content to protect the health and safety of users and moderators alike, avoid the most common content moderation mistakes:
Mistake #1: Crowdsourcing Moderation
In today’s digital age, content moderation is no longer optional. And the task of scanning for inappropriate content to ensure safe online communities 24/7 is increasingly being placed in the hands of human moderators.
Some companies settle for crowdsourcing moderation, the act of sourcing labor from large online groups of moderators. This option can be enticing, as these individuals typically offer remarkably low prices.
Unfortunately, crowdsourced moderators are typically anonymous, leaving little room for accountability to your company and leading to unmotivated moderators who fail to provide thorough reviews. Without formal accountability, there’s little stopping a crowdsourced laborer from applying their personal perspectives and values to moderation projects.
Even if the crowdsourced moderator takes their time scanning content before accepting or rejecting it, that doesn’t guarantee that they have a clear understanding of your brand and moderation criteria.
To avoid the risk of exposing your audience to offensive content, don’t make the mistake of trusting crowdsourced laborers with your moderation needs. Brands seeking a budget-friendly solution without the risks of crowdsourcing will find that combining trained professionals with artificial intelligence to moderate content in real time is the most effective approach.
Mistake #2: Failing to Train Moderators
With a hybrid approach to moderation, content is escalated to human moderators when AI falls short. Those moderators can then make the more nuanced decisions about a piece of content’s tone and context. Failing to properly train these moderators on the complexities of your brand’s unique content guidelines, and to arm them with clear rules to follow, is one of the most common moderation mistakes we see.
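To make that escalation flow concrete, here is a minimal sketch of how a hybrid pipeline might route submissions. The confidence thresholds, function names, and review queue are hypothetical illustrations, not a description of any particular vendor’s system.

```python
# Minimal sketch of hybrid moderation routing (hypothetical thresholds and names).
# Confident AI decisions are applied automatically; gray-area content is
# escalated to trained human moderators for a nuanced, brand-specific review.

AUTO_REJECT_THRESHOLD = 0.95   # assumed: model is very sure the content violates policy
AUTO_APPROVE_THRESHOLD = 0.05  # assumed: model is very sure the content is benign

def route_submission(content_id: str, violation_score: float, review_queue: list) -> str:
    """violation_score: the model's estimated probability that the content violates policy."""
    if violation_score >= AUTO_REJECT_THRESHOLD:
        return "rejected"      # overtly objectionable (e.g., nudity, hate symbols)
    if violation_score <= AUTO_APPROVE_THRESHOLD:
        return "approved"      # clearly benign
    # Gray area: tone, context, and brand-specific rules need a human decision.
    review_queue.append(content_id)
    return "escalated"

queue: list = []
print(route_submission("image-123", violation_score=0.40, review_queue=queue))  # "escalated"
```

The point of the sketch is simply that the AI handles the clear-cut calls while anything uncertain lands in front of a trained person.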
Training moderators is rarely a simple process since each rule typically requires a deep understanding of the various possible edge cases. Be careful of rules that are open to interpretation, such as asking content moderators to flag images for “irresponsible drinking” or “doing something dangerous” without clearly defining what is meant by those categories.
Take “irresponsible drinking,” for instance. If you sell wine to an audience of adults, your definition will be vastly different from a brand that sells games to children. To mitigate any uncertainty, clearly define the criteria and rules for your human moderators.
To set up a successful content moderation strategy and team, define your brand’s standards rather than leave moderation up to interpretation. Begin by thinking about who your audience is and what they value.
Moderators trained to address violations that fall into the gray areas can effectively make final image moderation decisions that align with your brand standards, establishing safer communities as a result. Be sure to look for a company whose moderators are all highly trained and continually assessed for accuracy and speed.
Mistake #3: Underestimating the Importance of Community Guidelines
Community guidelines set the standard for how your community members behave online. Without community guidelines, your members won’t know what type of content is acceptable and what is inappropriate.
As a result, moderators will have the time-consuming task of deliberating each time a post in the online community falls into a gray area. When moderators do remove a post, expect frustration on the user’s end over having their content removed for no apparent reason.
If you don’t have community guidelines in place, or you do but they are outdated and vague, it’s time to draft new guidelines. A strong online community starts with guidelines that point back to your organization’s mission statement, including what your organization does and how, who your organization serves, and what unique value you offer to both employees and consumers.
Once your company has laid this foundation, you have context for your community guidelines, and it’s time to determine what conduct will and will not be tolerated.
If you rattle off a generic list of rules and call it a day, your guidelines will be open for members’ interpretation.
It’s better to be specific from the beginning, offering detailed examples. For instance, after the subheading “No illegal activities and regulated goods,” give examples of what is not tolerated by your online community, including guns, explosives, and non-weapons being used for violent purposes (as TikTok does in its guidelines).
While you’re at it, spell out exactly what actions will be taken in response to guideline violations and enforce these consequences by removing any content that violates the guidelines. You can go the extra mile and provide members with the means to report content that they suspect to be in violation of your community guidelines through flags or other tools.
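As an illustration only, here is one way the mapping from violation category to consequence, plus a member’s report flag, might be wired together. The category names, actions, and helper function are hypothetical, not a recommended policy.

```python
# Hypothetical mapping of guideline categories to examples and enforcement actions.
ENFORCEMENT_POLICY = {
    "illegal_activities_and_regulated_goods": {
        "examples": ["guns", "explosives", "non-weapons used for violent purposes"],
        "action": "remove_content_and_warn_user",
    },
    "hate_speech": {
        "examples": ["slurs", "hate symbols"],
        "action": "remove_content_and_suspend_account",
    },
}

def handle_member_report(content_id: str, reported_category: str) -> dict:
    """Queue a member's flag for human review alongside the written policy it cites."""
    rule = ENFORCEMENT_POLICY.get(reported_category)
    if rule is None:
        # The report doesn't match a published guideline; triage it manually.
        return {"content_id": content_id, "status": "needs_manual_triage"}
    # A moderator confirms the violation before the consequence is enforced, so a
    # report motivated by personal disagreement can't remove legitimate content.
    return {"content_id": content_id, "status": "queued_for_review",
            "proposed_action": rule["action"]}

print(handle_member_report("post-456", "hate_speech"))
```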
Just don’t depend solely on your online community to report other individuals for suspected or actual violations of community standards. While most users will report others for genuinely violating a standard, some may also report someone simply for holding a different perspective from their own.
Keep in mind that good content moderation is more than clear community guidelines and frequent deletion of content based on reported or flagged posts. It’s important to treat community guidelines as one part of a successful content moderation strategy that also includes a mix of technology and human moderation.
Mistake #4: Turning All Content Moderation Over to Technology
Turning all of your content moderation efforts over to algorithms and technology may seem attractive. And there’s no denying that AI is responsible for the massive task of detecting and removing millions of posts containing drugs, hate speech, nudity, weapons, or offensive gestures. But too often, complete replacement of human moderators with technology creates a new set of problems.
AI struggles with context, resulting in occasional rejection of harmless content or, even more alarming, failure to catch a harmful submission. Since AI can’t distinguish nuance in speech or images the way humans can, even a model trained on millions of examples may make mistakes that humans would not.
Depending exclusively on AI for content moderation can result in two kinds of decision errors: false positives, where the system flags or removes a piece of content that is actually harmless, and false negatives, where it accepts a piece of content that is actually harmful.
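One way to keep both error types visible is to audit a human-labeled sample of the AI’s decisions. The sketch below is a generic illustration of that bookkeeping, with made-up field names rather than any real moderation log format.

```python
# Count false positives and false negatives from a human-labeled audit sample.
# Each record pairs the automated decision with a human reviewer's ground truth.
def moderation_error_counts(audit_sample: list) -> tuple:
    false_positives = sum(1 for r in audit_sample
                          if r["ai_removed"] and not r["actually_harmful"])
    false_negatives = sum(1 for r in audit_sample
                          if not r["ai_removed"] and r["actually_harmful"])
    return false_positives, false_negatives

sample = [
    {"ai_removed": True,  "actually_harmful": False},  # harmless post taken down
    {"ai_removed": False, "actually_harmful": True},   # harmful post missed
    {"ai_removed": True,  "actually_harmful": True},   # correct removal
]
print(moderation_error_counts(sample))  # (1, 1)
```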
Since automation can detect nudity better than it can recognize the complexities of hate speech, it gives the appearance that moderation is placing more emphasis on policing someone’s body than on speech that might actually be offensive. In the end, your company may inadvertently offend the very users you meant to protect.
Relying completely on technology for content moderation services has its downfalls, so don’t dismiss human moderators who can decipher gray areas. Instead, use AI to remove any overtly objectionable content such as hate symbols or pornography. And use a human team to review the content for more nuanced and brand-specific criteria.
Tech fail: Earlier this year, Facebook inadvertently removed a harmless Revolutionary War reenactment page in its efforts to ban violent groups from the platform.
Mistake #5: Neglecting to Prioritize Moderators’ Mental Health
It’s clear that relying completely on AI, without live moderation, isn’t the answer when moderating user-generated content (UGC). The nuanced decisions that human teams can make are integral to the online community, and as such, they must be involved in content moderation.
It would be a mistake, however, to bring human moderators on board without taking specific measures to protect their mental health and ensure that they have a safe work environment. Consider that live moderators spend most of their workday reviewing content to ensure that it protects the public while supporting your brand’s standards and values.
In the process of protecting the public on the other side of the screen, and in turn your company’s image, moderators may be exposed to content that can be upsetting and even taxing to their mental health.
The moderation team is there to protect your brand’s reputation, among other tasks, and in turn you must ensure that they are properly supported and reminded of their value. If you disregard their working conditions, as some brands have been accused of doing in the past, you could also earn a reputation as a company that is unconcerned with the mental wellbeing of its workforce.
To protect live moderators, mental health programming is especially important. Rather than moderating in-house or crowdsourcing the work, partner with a professional moderation company that prioritizes the mental health of its moderation team, as well as the team’s overall working conditions.
The best professional moderation agencies offer a comprehensive mental health program to anyone who will be moderating your platform’s content. Additionally, they regularly rotate moderators to less severe projects, giving them a break from seeing upsetting content on a daily basis.
To be sure you choose a qualified partner, WebPurify suggests posing this series of questions to any prospective content moderation partner.
The Takeaway
The biggest content moderation mistake is emphasizing and investing in only one moderation tool. The right mix includes an online community with clear guidelines and a means of reporting concerning content, AI that can recognize harmful words, patterns, and other problematic content, and a well-trained live team with access to mental health programs. With all three in place, you can ensure the safety of humans on both sides of the screen, reduce false positives and negatives, and improve online engagement for the success of your company.