
5 Content Trends that Demand Moderation

November 10, 2021 | UGC

Having an online presence is no longer a luxury reserved for tech-savvy companies and global brands. It’s 2021, and by now most organizations realize that a strong online presence is imperative. From local e-commerce marketplaces to worldwide online communities, most brands have a blog or social media channel at the very least.

At the same time, content creators are churning out photos, videos, comments, and more at an unprecedented rate. To say that anyone may publish anything at any time is not an exaggeration. Unfortunately, this includes cyberbullies, online trolls, phishing scammers, and spammers.

And as platforms and chat features become more immediate and accessible than ever, there are some new content trends on the block that come with their own set of risks. Here are the top 5 content trends so far in 2021, and what moderating each means for the safety of your brand and online community:

1. Memes

The mashup of images with some form of text is nothing new. Memes as content are just as popular as ever… and increasingly problematic.

While memes can be fun and amusing, not all memes are suitable for all audiences. And although many memes rely on sarcasm and satire to make a joke, there is a rise in memes that use offensive words or carry underlying tones of discrimination or racism.

A meme used for this purpose will likely be obvious to an experienced human moderator, who is trained to look at the context of an image that incorporates some form of text. It’s this subtlety, however, that makes memes difficult for AI to moderate. AI typically breaks a meme down into its picture and text elements and then analyzes each component separately.

While AI may detect blatantly offensive words or images, it may fail to detect how the meaning of a word or phrase can change from innocent to inappropriate when paired with a certain photo. As long as memes are getting shared, human image moderation will be essential for identifying context.
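To make that two-step analysis concrete, here is a rough sketch in Python. It uses the open-source pytesseract OCR library to pull out the text element; the scoring functions are hypothetical placeholders standing in for whatever classifiers a moderation vendor actually runs, and the thresholds are purely illustrative.

```python
# Sketch: break a meme into its text and picture elements, score each
# separately, and send ambiguous combinations to a human moderator.
from PIL import Image
import pytesseract  # open-source OCR; requires the Tesseract binary installed


def score_image(image: Image.Image) -> float:
    """Placeholder for an image classifier's 'offensive content' probability (0-1)."""
    return 0.0


def score_text(text: str) -> float:
    """Placeholder for a text classifier's 'offensive content' probability (0-1)."""
    return 0.0


def moderate_meme(image_path: str) -> str:
    image = Image.open(image_path)
    caption = pytesseract.image_to_string(image)  # the meme's text element

    image_score = score_image(image)
    text_score = score_text(caption)

    # Blatant violations in either element can be rejected automatically.
    if image_score > 0.95 or text_score > 0.95:
        return "reject"

    # Mid-range scores go to a human who can judge the combined context.
    if image_score > 0.40 or text_score > 0.40:
        return "human_review"

    # Note the limitation described above: a benign-looking image paired with
    # benign-looking text can still add up to an offensive meme and slip
    # through here, which is why human image moderation remains essential.
    return "approve"
```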

2. Live and In-App Chats

From the customer service rep who is berated by a customer in live chat to the gamer who takes it a little too far and verbally harasses others during in-game chats, in-app chats can do more harm than good if left unmoderated. But simply blocking “bad” words isn’t enough anymore. Clever users can convey ill will using seemingly appropriate words and phrases.

Live unboxing videos, customer service in-app chats, and live-streamed jam sessions are all excellent ways to engage any audience. It’s up to you, however, to go the extra mile to keep the live stream on brand, while also protecting viewers from potentially offensive content posted by other users.

If you offer live streaming from your own custom app or website, at the very least, you should utilize a standard profanity filter to moderate the live chat. An even better approach, however, is to utilize a more advanced context-based text analysis tool, like WebPurify’s Smart Screen. This will allow you to flag phrases that convey bullying, personal attacks, criminal behavior, bigotry, mental health issues, and more. Look for a service that offers the option to customize your own “block” and “allow” lists, as well as coverage for multiple languages.
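As a simple illustration of how custom “block” and “allow” lists typically layer on top of a standard profanity filter, here is a minimal sketch in Python. This is not WebPurify’s Smart Screen API; the word lists and phrases below are hypothetical stand-ins.

```python
# Sketch: a base profanity filter extended with a community's own
# custom block and allow lists (all word lists here are stand-ins).
BASE_PROFANITY = {"badword1", "badword2", "bloody"}   # vendor's built-in list
CUSTOM_BLOCK = {"dm me for free skins", "trashcorp"}  # phrases specific to your community
CUSTOM_ALLOW = {"bloody"}                             # terms your community permits


def should_block(message: str) -> bool:
    text = message.lower()

    # Custom block list: phrases a generic filter would not catch.
    if any(phrase in text for phrase in CUSTOM_BLOCK):
        return True

    # Base filter, minus anything explicitly allow-listed.
    flagged = {word for word in text.split() if word in BASE_PROFANITY}
    return bool(flagged - CUSTOM_ALLOW)
```

A real context-based tool analyzes intent rather than just matching words, but the custom block/allow override works on the same principle.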

The visual content of a live stream is equally important to monitor. This can be accomplished using a combination of AI and live teams. You will need to decide which AI threat scores trigger the immediate removal of a live stream (for example, a 95% likelihood of raw nudity), and which mid- to upper-range threat scores require human review to validate.

Escalating content to live teams once it is reported by users is also essential. Once content is escalated to moderators, whether by AI scoring or user reporting, the moderators can elect to pull the plug on anything deemed harmful, hateful, illegal, off topic, or otherwise questionable.
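One way to wire those threat scores and user reports into moderation actions is sketched below. The thresholds are only illustrative; in practice each platform tunes its own cut-offs per category (nudity, violence, hate symbols, and so on).

```python
# Sketch: route a live stream based on an AI threat score (0-1) and
# user reports. Thresholds are illustrative, not recommendations.
def route_live_stream(ai_threat_score: float, user_reports: int) -> str:
    if ai_threat_score >= 0.95:
        # e.g. a 95%+ likelihood of raw nudity: remove the stream immediately.
        return "remove"

    if ai_threat_score >= 0.60 or user_reports > 0:
        # Mid- to upper-range scores, or any user report, go to the live
        # team, who decide whether to pull the plug.
        return "escalate_to_human"

    return "allow"


# Example: route_live_stream(0.97, 0) -> "remove"
```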

Other scalable approaches to live stream moderation include having moderators monitor all new “first-time” streamers or those who have been offenders in the past. An effective live stream moderation tool will allow the team to monitor multiple first-time streams at the same time, on a single screen, quickly switching between them to check the audio.

If you are using a platform like YouTube or Instagram Live, be sure to adhere to that platform’s community guidelines, as well as establish your own stream chat rules. It’s also important to be clear about the consequences for breaking those rules, and to enforce them. For instance, you may choose to give first-time offenders a warning and ban viewers entirely if they become repeat offenders.

3. Nuance in Speech or Images

If we’ve learned anything over the past year, it’s that unforeseen cultural, political, or religious issues require sensitivity when they emerge. This means that content moderation policies may have to change quickly to address gray areas. 

What may have been harmless content six months ago might be offensive in light of current events. In these cases, AI can begin the work of identifying content that is blatantly harmful, while content that falls into gray areas should be escalated to human moderators who can distinguish nuance in speech or images.

Even then, it can be difficult for human moderators to decide how to address content that could easily be interpreted as supportive (positive) or sarcastic (negative). In these instances where it may be unclear which way moderation should go, communication and detailed guidelines become essential.

4. Spam and Machine-Generated Content

Human creators who are insensitive or, worse, ill-intentioned aren’t the only content challenge in 2021. Machine-generated content has become more sophisticated, presenting additional problems for existing platforms.

Malicious organizations are becoming better at bypassing a platform’s account verification mechanisms, enabling them to upload content that compromises your audience’s experience. Whether the source is AI, a spam bot, or a real user, platforms need to filter all content to combat these growing threats and remove damaging content.

5. Content in Other Languages

There is a growing need for content moderation that is multilingual and AI that recognizes an array of languages, as well as the social contexts of the cultures associated with these languages.

AI, however, can have trouble moderating the proliferation of foreign language content. For example, Facebook’s AI moderation ostensibly can’t interpret many languages used on the site. What’s more, Facebook’s human moderators can’t speak languages used in some foreign markets Facebook has moved into. The result is that users in some countries are more vulnerable to harmful content. 
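One practical building block here is detecting a message’s language before moderation, so that content in an unsupported language never passes through unreviewed. Below is a small sketch using the open-source langdetect library; the supported-language set and queue names are assumptions for illustration, not anyone’s actual coverage list.

```python
# Sketch: route user-generated content by detected language before moderation.
from langdetect import detect, LangDetectException

SUPPORTED_LANGUAGES = {"en", "es", "fr", "pt", "ar"}  # illustrative coverage only


def route_by_language(text: str) -> str:
    try:
        lang = detect(text)
    except LangDetectException:
        lang = "unknown"  # e.g. emoji-only or very short messages

    if lang in SUPPORTED_LANGUAGES:
        return f"moderation_queue_{lang}"

    # Content in an unrecognized or unsupported language shouldn't pass
    # through unchecked; send it to a human review queue instead.
    return "human_review_queue"
```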

This is the primary reason that WebPurify works with languages beyond English and maintains a growing list of supported languages.

Conclusion: Adopt a Hybrid Content Moderation Approach

The demand for content moderators has increased as more organizations realize the value of having a solid online presence. To keep a brand’s identity and its community safe from harmful content, moderation is a must. 

It’s clear that content moderation will continue to rely on humans who can distinguish nuance that AI would miss. Content moderation teams, however, need AI to monitor large volumes of user-generated content. AI can also be used to shield moderators from content that could otherwise take a toll on their mental health.

To successfully moderate for content that can be harmful to your audience and your brand, look for a company of experts offering moderation that blends the power of technology and humans. At WebPurify, we offer advanced AI services, custom content moderation plans, affordable turnkey services, and dedicated live teams that can be trained on your specific moderation criteria.