
The Top 6 Tips for Addressing Content Concerns Using Moderation in 2022

December 29, 2021 | UGC

In the past, most brands’ primary concern was using content to attract users to their platforms. Today, the amount of information online has grown substantially, and the uses for user-generated content have evolved along with it.

86% of U.S. adults get their news from a smartphone, and 48% say they turn to social media for news.

With this in mind, it’s no wonder that inappropriate and inaccurate content has spread like wildfire over the past few years. In response, organizations must go the extra mile to moderate their platforms for content that is fake, dangerous, or both.

Today we’ll look at the top 6 things you need to know about the direction content is heading in 2022, as well as suggestions for using content moderation to address these concerns for the safety of your online community and your brand’s reputation: 

1. Photo and Video Moderation Must Be a Priority

Photos, audio, and video now make up the majority of content found online. And since written content is no longer king, older technology used to detect offensive language is being supplemented by tech targeting offensive image and video content. 

Detecting harmful content in images and video demands advanced technology. One of the best ways to keep damaging content off your brand’s platform is to implement AI that rejects blatant violations automatically and flags suspicious content, escalating it to live moderators for further analysis.
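As a rough sketch of what this AI-plus-human escalation flow can look like in practice (the model interface, thresholds, and queue below are hypothetical stand-ins, not WebPurify’s actual API):

```python
# Minimal sketch of a hybrid moderation pipeline: AI decides the clear
# cases, humans review the gray area. All names and thresholds are illustrative.
from queue import Queue

REJECT_THRESHOLD = 0.95   # model is confident the image violates policy
APPROVE_THRESHOLD = 0.10  # model is confident the image is clean

def moderate_image(image_bytes: bytes, model, human_review_queue: Queue) -> str:
    """Return 'rejected', 'approved', or 'escalated' for a single upload."""
    score = model.violation_probability(image_bytes)  # hypothetical model call

    if score >= REJECT_THRESHOLD:
        return "rejected"    # blatant violation: block automatically
    if score <= APPROVE_THRESHOLD:
        return "approved"    # clearly safe: publish automatically

    human_review_queue.put(image_bytes)  # suspicious: a live moderator decides
    return "escalated"
```

The thresholds control how much content lands in the human queue; tightening them trades moderator workload against the risk of automated mistakes.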

2. The Spread of Misinformation Must Be Prevented

10% of adults admit to intentionally sharing fake news online, while up to 86% of all internet users report having been fooled by misinformation at some point. Immense amounts of unverified information continue to find their way onto online platforms – content that could at the very least be offensive, as was the case with the #BlackandWhitephotographs trend of 2020.

This campaign was originally intended to protest the murder of Pinar Gültekin, a 27-year-old Turkish student, and to raise awareness about the spike in violence against women in Turkey. By late July 2020, the #BlackandWhitephotographs challenge (also known as #WomenSupportingWomen and #ChallengeAccepted) had morphed into a generic social media trend. In response, Turkish women took to social media to express their frustration at seeing the challenge’s original purpose buried under a flood of selfies with vague or vain purposes.

At its worst, unverified information shared in large numbers can negatively impact millions, leading to violence, political instability, and even illness and death. The misinformation about treating COVID-19 with ivermectin that circulated on social media is one example of how dangerous fake news can be.

Across the U.S., poison control centers have reported an alarming rise in ivermectin overdoses and adverse side effects from using the drug off-label as a supplement to, or replacement for, FDA-approved vaccines. As a result, Facebook, Google, and other big players have been widely criticized for failing to thoroughly enforce rules against the spread of ivermectin misinformation.

To prevent offensive comments, fake recommendations, and dangerous misinformation on your platform, set up a reliable image and video moderation process by collaborating with a content moderation professional who can help you navigate your specific moderation needs. At WebPurify, we use a hybrid system of AI and live human moderation to scan for and block inappropriate and harmful user-generated content before it can be shared.

3. Content Moderation Must Be Proactive

The language of bad actors online is in constant flux, and simply blocking “bad” words with a basic keyword algorithm is no longer sufficient. Savvy users with ill intent combine words and phrases that would be harmless on their own to circumvent AI.

The changing linguistic landscape of groups encouraging hate speech, racism, and violence on the internet calls for content moderation that is proactive in detecting and eliminating harmful content before it can reach your audience. Look for AI that begins the work by identifying content that is obviously harmful, while escalating content that falls into gray areas to human moderators who can distinguish nuance in speech or images.

Additionally, work with a moderation expert whose human team continuously gathers and reviews content trends. The team of moderators should also have access to a database of malicious content themes that is frequently updated with the new ideas, phrases, and symbols being shared by bad actors. With this database at their fingertips, content moderators can be as proactive as possible.
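To make the idea concrete, here is a minimal sketch of such a term database in code; the class name, refresh mechanism, and example terms are invented for illustration:

```python
import re
from threading import Lock

class MaliciousThemeWatchlist:
    """Matches text against a frequently refreshed list of bad-actor phrases."""

    def __init__(self) -> None:
        self._patterns = []
        self._lock = Lock()

    def refresh(self, latest_terms: list) -> None:
        """Swap in the newest terms gathered by the trend-review team."""
        compiled = [re.compile(re.escape(t), re.IGNORECASE) for t in latest_terms]
        with self._lock:
            self._patterns = compiled

    def flag(self, text: str) -> list:
        """Return the watchlist terms found in the text, if any."""
        with self._lock:
            return [p.pattern for p in self._patterns if p.search(text)]

# Usage sketch: refresh() runs whenever moderators publish new findings.
watchlist = MaliciousThemeWatchlist()
watchlist.refresh(["coded phrase 1", "coded phrase 2"])  # placeholder terms
print(watchlist.flag("a comment containing coded phrase 2"))
```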

4. AI Must Aid in Combating Cyberbullying 

Screen time rose during the pandemic and has yet to drop: an estimated 70% of kids and teens spend an average of four hours per day in front of a device. This rise in screen time has been linked to a spike in cyberbullying, exacerbated by savvy cyberbullies who find new ways to bypass the anti-bullying technology that social media platforms and networks put in place.

This abusive content isn’t the easiest to identify. However, content moderation experts who work behind the scenes 24/7 are trained to scan for profanity, destructive content, and instances of cyberbullying on online dating platforms, children’s sites, social media channels, and in video game chats.

To combat different forms of cyberbullying, WebPurify uses a hybrid approach, pairing AI and live moderation to identify high-risk user-generated images and video, decreasing the spread of this harmful content and reducing instances of cyberbullying. Our live moderators scan for nuance, reviewing images and text with a sarcastic or suspicious tone to flag cyberbullying. Additionally, our proprietary AI-based offensive-intent Smart Screen technology can be combined with profanity filtering for precise moderation based on phraseology and context across 7 categories, including racism, personal attacks, bullying, and mental health issues.
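Smart Screen itself is proprietary, but as a hedged illustration of the general pattern it describes (a word filter combined with a multi-category intent classifier), something like the following sketch captures the flow; the word list, categories, threshold, and scoring function are all placeholders:

```python
PROFANITY = {"badword1", "badword2"}  # placeholder word list
CATEGORIES = ("racism", "personal_attack", "bullying", "mental_health")
THRESHOLD = 0.8  # illustrative confidence cutoff per category

def classify_intent(text: str) -> dict:
    """Placeholder for an ML model scoring text across moderation categories."""
    return {category: 0.0 for category in CATEGORIES}  # a real model goes here

def moderate_text(text: str) -> dict:
    """Combine simple word filtering with category-level intent scores."""
    profanity_hits = sorted(set(text.lower().split()) & PROFANITY)
    flagged = [c for c, s in classify_intent(text).items() if s >= THRESHOLD]
    return {
        "profanity": profanity_hits,
        "categories": flagged,
        "action": "escalate" if profanity_hits or flagged else "approve",
    }
```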

5. Content Moderation Must Respond to Current Events in Near-Real Time

During the Black Lives Matter protests of 2020, users posted black squares in solidarity, only for many automated systems to reject them as seemingly blank images.

During and after the January 6th riot at the U.S. Capitol, photos once regarded as innocuous took on a whole new, and possibly harmful, meaning. And while AI successfully detects and removes content containing illegal activity, hate speech, nudity, weapons, or other harmful expressions, it struggles with context. The result can be a failure to catch dangerous submissions, such as images of an individual in Nancy Pelosi’s office.

In these instances, it takes a team of well-trained human moderators who are up to speed on current events to scan for and react properly to photos that aren’t traditionally labeled as inappropriate but are justifiably concerning in light of current events.

Since AI can’t distinguish nuance in speech or images the way humans can, depending exclusively on AI for content moderation can also result in moderation errors, like the false positive that occurs when a harmless black square is flagged as problematic. To address real-time concerns over deciphering gray areas, use AI to remove blatantly objectionable content such as pornography or violence, and use a human team to review content against more nuanced and brand-specific criteria.

Protect your audience and brand by teaming up with live content moderators who can flag social media activity that promotes even subtle violence (should it slip past AI) while understanding when a black square is more than a blank image!
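One hedged sketch of how a platform might handle the black-square case specifically: rather than auto-rejecting near-blank uploads, route them to the human queue. This sketch uses the third-party Pillow imaging library, and the threshold is purely illustrative:

```python
from PIL import Image, ImageStat  # third-party Pillow library

NEAR_BLANK_STDDEV = 5.0  # illustrative: almost no pixel variation

def route_upload(path: str, human_queue: list) -> str:
    """Escalate near-blank images instead of rejecting them outright.

    A solid black square may be a protest symbol rather than a junk upload;
    only a moderator who is up to speed on current events can tell.
    """
    grayscale = Image.open(path).convert("L")
    stddev = ImageStat.Stat(grayscale).stddev[0]  # per-band pixel std. dev.
    if stddev < NEAR_BLANK_STDDEV:
        human_queue.append(path)  # gray area: a person decides
        return "escalated"
    return "continue_automated_checks"
```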

6. Content Moderation Must Address Spamming & Phishing Concerns

As we wrap up 2021, it is easier than ever for anyone to publish anything at any time – and unfortunately, that includes phishing scammers, spammers, and online trolls.

From dating platforms to gaming sites, profiles can be used to share and spread phishing scams and schemes. And phishing, or luring individuals into giving out credit card numbers, passwords, and other personal information, can result in identity theft. 

As if content creators with ill intent weren’t enough of a moderation concern, machine-generated content has become more sophisticated, presenting a growing set of problems for existing platforms. Specifically, malicious organizations are becoming better at circumventing a platform’s account verification mechanisms, which allows them to upload content that threatens your audience’s experience. Whether the source is a spam bot or a human, platforms must filter all content to combat these growing threats and remove harmful content.

To prevent fraudulent activity, collaborate with a content moderation expert to create and enforce your platform’s privacy guidelines. To prevent phishing scams on your gaming site or dating platform, work with a content moderation professional to review each registered profile, confirm that the information each user submits is authentic, and scan interactions between end users. On children’s online platforms, any user who suggests taking a conversation off the platform should be flagged.

And on both children’s platforms and dating platforms, the sharing of personal contact information also requires careful moderation. This is where an automated filter comes into play, screening for email addresses and phone numbers, and keeping conversations on the platform. 
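As a minimal sketch of such a filter (the regular expressions below are deliberately simplified; a production filter must also catch obfuscations like “name (at) mail (dot) com” or spelled-out digits, and the off-platform phrases are hypothetical examples):

```python
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"(?:\+?\d{1,3}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}")
# Hypothetical phrases that suggest moving a conversation off-platform
OFF_PLATFORM_RE = re.compile(r"\b(text me|call me|add me on|dm me at)\b", re.IGNORECASE)

def screen_message(text: str) -> dict:
    """Flag contact details and off-platform invitations in a chat message."""
    return {
        "email": bool(EMAIL_RE.search(text)),
        "phone": bool(PHONE_RE.search(text)),
        "off_platform": bool(OFF_PLATFORM_RE.search(text)),
    }

# Example: screen_message("call me at 555-123-4567") flags both
# the phone number and the off-platform invitation.
```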

The Takeaway: 

As the year comes to an end, now is the time to analyze the content moderation tools your brand utilizes to address the content concerns that became prevalent in 2021. 

Above all, advanced image and video moderation, the ability to catch and remove misinformation, a proactive approach to detecting and eliminating harmful content, and the capacity to respond to current events in near-real time are paramount.