
Beyond NSFW: 20 Industry-Specific Criteria for Content Moderation

June 7, 2021 | Image Moderation, UGC

Content moderation is a necessary safety net for catching content that can pose a threat to your audience and your brand. Moderation by live teams, artificial intelligence (AI), or ideally a combination of both can protect your online community and your brand’s reputation at the same time.

Beyond moderating for violence, nudity, and other obvious concerns, here are some special content considerations that each industry should take into account:

1) Children’s Online Platforms

Much emphasis is placed on content moderation for children’s online platforms, as it is vital to protect this audience that is especially vulnerable to harmful online content. In addition to a strong profanity filter, we’ve found that more extensive content moderation is imperative to eliminate inappropriate user-generated content (UGC) in less obvious areas.

  • PII: Even when images or videos are posted innocently, they may contain details that predators can use. Personally identifiable information (PII) can reveal a child’s whereabouts through something as seemingly simple as an image containing a street sign, a mailbox with an address number, or a backpack or T-shirt with a school’s name on it. Because predators can use this material for “grooming,” it’s critical that any PII that might give a public viewer knowledge of a child’s location is filtered out of children’s websites (a minimal detection sketch follows this list).
  • Cyberbullying: The use of electronic communication to bully an individual with threatening or intimidating memes or text, cyberbullying is alarmingly common: based on data compiled by advocates at Guard Child, 65% of children between the ages of eight and 14 have been involved in a cyberbullying incident. To stop the spread of harmful content and reduce these incidents, use an approach that combines AI and live moderation to detect high-risk, user-generated images and video.
  • Solicitations: Children are spending more time on virtual platforms than ever before, leaving them vulnerable to online sexual exploitation and solicitation. Online solicitation occurs when an individual 17 or older intentionally communicates electronically in a sexually explicit manner with a minor. Suggestive videos, GIFs, and other visual content online – including advertisements – should be removed from children’s online platforms immediately, and any user who suggests taking conversations off the platform should be flagged as well.
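The PII concern above is partly a text problem hiding inside images: street signs, mailboxes, and school names are usually legible text. Below is a minimal Python sketch of that idea, assuming the Pillow and pytesseract packages (and the Tesseract OCR engine) are available; the patterns and file name are illustrative placeholders, and a production filter would combine this with far broader rules and human review.

```python
# Minimal sketch: flag uploaded images whose visible text looks like
# location PII (street addresses, school names, ZIP codes).
# Assumes the Pillow and pytesseract packages plus the Tesseract OCR engine
# are installed; the patterns and file name below are illustrative only.
import re

from PIL import Image
import pytesseract

PII_PATTERNS = [
    re.compile(r"\b\d{1,5}\s+[A-Z][a-z]+\s+(St|Ave|Rd|Blvd|Ln|Dr)\b"),   # street address
    re.compile(r"\b[A-Z][a-z]+\s+(Elementary|Middle|High)\s+School\b"),  # school name
    re.compile(r"\b\d{5}(-\d{4})?\b"),                                    # ZIP code
]

def image_contains_location_pii(path: str) -> bool:
    """Return True if OCR-extracted text matches a location-PII pattern."""
    text = pytesseract.image_to_string(Image.open(path))
    return any(pattern.search(text) for pattern in PII_PATTERNS)

if __name__ == "__main__":
    if image_contains_location_pii("uploaded_photo.jpg"):  # placeholder path
        print("Hold for human review: possible location PII in image text")
```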

2) Dating Platforms

When it comes to moderating dating platforms, emphasis is and should continue to be placed on privacy, keeping contact information secure, and moderating profiles for authenticity, since an obviously fake profile will drive users away from your platform. But parameters around some dating-specific content may need to be relaxed, and the moderation focus shifted to less obvious areas.

  • Profile Particulars: Profile moderation can start with photo particulars, such as allowing only a full human face (with allowances for headwear) and never permitting a meme or a photo with two people in it. Depending on the target audience, it can be beneficial to loosen dating profile parameters and permit partial nudity on a user’s profile, such as someone in a bikini, while restricting more risqué photos to a password-protected area on your dating platform. Another aspect of dating profiles that we find requires careful moderation is the sharing of personal contact information. An automated filter can screen for email addresses and phone numbers within profiles and early communications, keeping the conversation on the dating site until an appropriate time (a minimal filter sketch follows this list).
  • Phishing and Fraudulent Activity: Unfortunately, profiles can be used to share and spread phishing schemes and scams. Phishing is the fraudulent practice of sending emails pretending to be from reputable sources to coax individuals into giving out credit card numbers, passwords, and other personal information. This often leads to identity theft, something you don’t want associated with your brand! To prevent fraudulent activity on your dating platform, collaborate with a content moderation expert to review each of the profiles registered on the platform, confirm that the information each user submits is authentic, and scan interactions between end users.
  • Inappropriate or Irrelevant Activity: In addition to moderating profiles for fraudulent activity and authenticity, content should be moderated for off-brand, irrelevant profiles, especially if your app or website serves a very specific target audience. Offensive interactions among users result in a bad user experience, so work with a moderation team that can make sure cyberbullying and online harassment are not tolerated. Furthermore, ensure positive interactions with moderation that suspends members who send unsolicited photos and bans members who repeatedly jeopardize the safety and/or privacy of other users.
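As a rough illustration of the automated contact-information screen described in the Profile Particulars bullet, the Python sketch below checks profile text and early messages for email addresses and phone numbers. The regular expressions are deliberately simplified; real filters also catch spelled-out digits, spacing tricks, and other workarounds.

```python
# Minimal sketch of an automated contact-information screen for dating
# profiles and early messages. The patterns are deliberately simplified;
# production filters also catch spelled-out digits, spacing tricks, etc.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"(\+?\d[\s.-]?){7,15}")  # loose match on 7-15 digit sequences

def contains_contact_info(text: str) -> bool:
    """Return True if the text appears to include an email address or phone number."""
    return bool(EMAIL_RE.search(text) or PHONE_RE.search(text))

# The first two messages would be held back until the platform allows sharing.
print(contains_contact_info("Message me at jane.doe@example.com"))  # True
print(contains_contact_info("Call me: 555 867 5309"))               # True
print(contains_contact_info("I love hiking and bad puns"))          # False
```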

3) Gaming Sites

Moderating content on gaming sites typically focuses on the inflammatory content that can occur in in-game chats. But this is just the tip of the moderation iceberg when it comes to gaming site content. In our experience, moderation may need to be expanded to include prevention of cyberbullying and more.

  • Cyberbullying: Reports of verbal harassment in in-game chats and lobbies are on the rise. Players who interfere with or sabotage the gameplay of others and verbally harass them during in-game chats, known as “griefers,” are becoming all too common. For advanced precision based on phraseology and context across multiple categories, pair AI-based offensive-intent technology with profanity filtering as part of your gaming site moderation approach.
  • Inappropriate Profanity: Gaming users continue to find new and creative ways to submit offensive language, so partnering with a content moderation team that is always adjusting its profanity filter algorithm is key to earning the trust of online gamers and offering a user-friendly site. An advanced algorithm can catch the many forms of offensive text in your in-game chat. If your budget only accommodates a more standard profanity filter, however, look for a service with the option to customize your own “block” and “allow” lists and moderate offensive content in multiple languages (a simple filter sketch follows this list).
  • Acceptable Profanity: An automated flagging system removes curse words automatically, but this isn’t always an on-brand move. Depending on your target audience, you may elect to relax moderation requirements around certain curse words and allow heated competitive language, while blocking hateful or racist exchanges, along with other content such as the names of competitors’ games. You can accomplish this with custom “allow” lists, as well as an advanced content moderation service that detects the context of your in-game chats. Also, be sure to select a service that offers dedicated endpoints that accommodate high volumes of user-generated text and scale up on demand.
  • Profile Particulars: Profile moderation can start with setting parameters for user-created profile images based on your brand’s standards and moderation criteria. Work hand in hand with your content moderation agency to customize your moderation criteria. To ensure that only appropriate content is featured, partner with a live team of moderation professionals who will review all profile photos.
  • Fraudulent Activity: To prevent fraudulent activity, collaborate with a content moderation expert to closely review each of the profiles on your gaming platform and ensure implementation of the privacy guidelines that your platform sets. Your content moderation partner should offer expertise in identifying fake profiles through comprehensive verification processes.
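To make the custom “block” and “allow” list idea concrete, here is a minimal Python sketch of an in-game chat filter with light normalization to catch creative spellings such as leetspeak and stretched letters. The word lists and leet map are placeholders; a real service layers this beneath a much larger multilingual lexicon and context-aware AI.

```python
# Minimal sketch of a custom "block"/"allow" list chat filter with light
# normalization to catch creative spellings (leetspeak, stretched letters).
# The word lists and leet map are placeholders, not a real lexicon.
import re

LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "7": "t", "@": "a", "$": "s"})
BLOCK_LIST = {"noob", "rivalgame"}   # e.g. harassment terms, competitors' game names
ALLOW_LIST = {"rekt"}                # heated but acceptable competitive language

def normalize(word: str) -> str:
    """Lowercase, undo common leetspeak, and collapse runs of 3+ repeated letters."""
    word = word.lower().translate(LEET_MAP)
    return re.sub(r"(.)\1{2,}", r"\1", word)

def flag_chat_message(message: str) -> list[str]:
    """Return the blocked terms found in a chat message after normalization."""
    flagged = []
    for raw_word in re.findall(r"[\w@$]+", message):
        word = normalize(raw_word)
        if word in BLOCK_LIST and word not in ALLOW_LIST:  # allow list overrides block list
            flagged.append(raw_word)
    return flagged

print(flag_chat_message("get rekt n00b, RivalGame is better"))  # ['n00b', 'RivalGame']
```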

4) E-Commerce Retailers and Online Marketplaces

With online marketplaces and new apps making it easy to post a product within minutes from smart devices, anyone can be an e-commerce retailer. To make your users’ experience safe and positive, these convenient platforms need to be moderated for the obvious issues, such as poor-quality photos or videos, incorrect listings, and duplicate listings of the same item. Moderation of illegal products and contextually inappropriate listings, however, is important yet often overlooked.

  • Illegal Products: Online sales of illegal items can not only compromise the user experience when buying and selling online but also put your brand at risk. To ensure a legal, engaging user experience, listings for drugs, weapons, and other illegal products and activities must be prohibited. Additionally, fraudulent listings must be taken down promptly. Your content moderation partner should be able to monitor illicit listings on a consistent basis to keep your e-commerce platform free from the sale of restricted items.
  • Contextually Appropriate Content: If you’re depending solely on automated services to moderate your online e-commerce website or app, consider that AI is limited by its inability to comprehend the nuances in images and videos. While it may identify an instance of partial nudity, such as cleavage, it is most likely unable to determine if the partial nudity is contextually appropriate for the product being sold. 

For example, if a seller’s product is an office desk, an image of someone sitting at that desk in a swimsuit showing significant cleavage is unnecessary and typically not allowed. Alternatively, if the product is in fact that skimpy swimsuit, then e-commerce platforms could reasonably allow an image with cleavage when someone models the suit. Like many content moderation judgments, it is all about context, and without the help of humans, AI may mistakenly block that seller’s swimsuit for partial nudity. When it comes to context, the bottom line is that the best approach to moderation uses AI and non-crowdsourced human teams to review all e-commerce content (a simple routing sketch appears at the end of this section).

  • Disintermediation: While offensive content can certainly drive users away from your e-commerce platform, users buying directly from sellers outside of your platform can also have a substantial economic impact on your business. Disintermediation, the practice of cutting the intermediary out of a future transaction, is a growing online marketplace concern. To keep sellers from directing people off of your e-commerce platform to purchase directly from them, monitor for sellers whose images or videos promote links to their own website or contact information. If a seller suggests that the product will cost less should the buyer reach out to them directly, this should also raise a red flag. Working with your moderation partner to monitor content closely for any contact information or attempts to drive buyers away from your platform is essential.
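The swimsuit example above boils down to a routing rule: an AI nudity score alone should not decide the outcome, because the product category changes what is appropriate. The Python sketch below illustrates that logic; the thresholds, category names, and 0–1 score scale are invented for illustration and are not any particular vendor’s model output.

```python
# Minimal sketch of context-aware routing for listing images: the product
# category, not the nudity score alone, decides whether partial nudity is
# acceptable. Thresholds, categories, and the 0-1 score scale are invented.
SWIMWEAR_CATEGORIES = {"swimwear", "lingerie", "activewear"}

def route_listing_image(nudity_score: float, product_category: str) -> str:
    """Return 'approve', 'reject', or 'human_review' for a listing image."""
    if nudity_score < 0.2:
        return "approve"          # clearly safe regardless of category
    if product_category.lower() in SWIMWEAR_CATEGORIES:
        return "human_review"     # partial nudity may be contextually appropriate
    if nudity_score > 0.8:
        return "reject"           # high-confidence violation on an unrelated product
    return "human_review"         # ambiguous: let a trained moderator judge context

print(route_listing_image(0.6, "Office Desks"))  # human_review
print(route_listing_image(0.6, "Swimwear"))      # human_review
print(route_listing_image(0.9, "Office Desks"))  # reject
```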

5) Blogs & Communities

Community guidelines and reporting features are common approaches to content moderation for blogs and online communities. And while these are effective to an extent, it is our experience that relying solely on your community for self-moderation can and will fall short when it comes to protecting your community’s safety and credibility.

  • Lack of Moderation Resources: For anyone who runs a small blog or community, it can be difficult, if not impossible, to monitor user-generated content 24/7. And hiring employees to help review user commentary and other moderation needs is often not in the budget for smaller companies. A moderation solution that offers an automated profanity filter to catch standard offensive content, in addition to any words on your “block” list, can be a cost-effective alternative to a big contract (a minimal API-call sketch follows this section). In addition, utilize turnkey image moderation and video moderation services. These services offer lower per-image and per-video pricing because the moderation teams are shared across multiple projects. Your smaller volumes, combined with many other platforms’ submissions, allow services like WebPurify to offer these cost-effective options for projects with lower budgets. The only drawback to this approach is that the moderation criteria are not customizable for your platform’s specific needs; the shared team moderates all of the projects against the same strict NSFW criteria. You will, however, have an effective service ensuring that all content is manually checked for the standard “bad stuff.”
  • Off-Brand Content: For a large blog or online community with a healthy moderation budget, you will want to tackle more than NSFW content and profanity. In order to keep the conversation “on brand,” you should engage an experienced custom moderation service that offers a hybrid of automated and live moderation solutions. A qualified moderation partner should go beyond simply monitoring blogs and forums, acting as a consultant and helping you shape your moderation criteria to determine what content and conversations align with your brand, topics, and purpose.
  • Abusive User Posts: Online communities and forums offer anonymity that can lower users’ inhibitions, resulting in trolling and flaming. Trolling is the intentional use of inflammatory language to provoke arguments or conflict and disturb civil conversations in blog comments, chat forums, or other online communities. Flaming, or roasting, is the use of profanity or offensive language to produce a negative reaction from the reader.

Trolling and flaming can create a hostile online environment, leading other users to feel the need to reply to the offender or potentially leave the chat room, online forum, or message board altogether. To protect your online community from harmful, inappropriate posts that may cost you existing members and push away potential members, work with a moderation agency that offers a live team that specializes in monitoring online communities, as well as an automated flagging system to identify potential threats. 
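For smaller blogs and communities, the turnkey approach described above usually means sending each new comment to a hosted moderation API before it goes live. The Python sketch below shows the general shape of such a call; the endpoint URL, parameters, and response fields are hypothetical placeholders rather than any specific vendor’s API, so consult your provider’s documentation for the real request format.

```python
# Minimal sketch of sending each new comment to a hosted moderation API
# before publishing it. The endpoint URL, parameters, and response fields
# are hypothetical placeholders; check your vendor's documentation for the
# real request format and authentication scheme.
import requests

MODERATION_ENDPOINT = "https://api.example-moderation.com/v1/check-text"  # placeholder URL
API_KEY = "your-api-key"                                                  # placeholder credential

def comment_is_publishable(comment: str) -> bool:
    """Return True if the moderation service reports no profanity or blocked terms."""
    response = requests.post(
        MODERATION_ENDPOINT,
        data={"api_key": API_KEY, "text": comment},
        timeout=5,
    )
    response.raise_for_status()
    result = response.json()
    return not result.get("flagged", True)  # fail closed if the field is missing

if __name__ == "__main__":
    if comment_is_publishable("Great post, thanks for sharing!"):
        print("Publish comment")
    else:
        print("Hold comment for review")
```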

6) Food and Beverage

Online reviews and social media comments are the new word-of-mouth advertising, and since more consumers have access to a growing body of online information, this form of content can be either an effective branding tool or a brand’s undoing. In fact, 70% of consumers put more weight on what a fellow consumer has to say about a brand than on what the business says about itself in professional marketing content. With this in mind, let’s look at some potential vulnerabilities that any brand in the food and beverage industry should consider and the moderation criteria that address them.

  • Brand Trashing: It’s one thing to have your moderation partner scan user-generated photos and reviews, allowing both positive and negative comments that will position your brand as trustworthy. It’s another to permit hostile comments, feedback, and reviews intentionally posted by competing brands. A universal concern across industries, brand trashing can be offensive to your customers and harmful to your brand’s image. There are even instances where it could escalate into a public relations nightmare with legal ramifications. This is where a content moderation expert can make all the difference, helping you navigate your specific moderation needs and avoid brand trashing.
  • Competing Brands: Differentiating your brand from the competition is essential in the crowded food and beverage industry. In light of this, measures should be taken to keep competing brands from being mentioned or pictured in UGC. This can be mitigated by utilizing moderation services that let you add competitors’ names to a block list for text submissions. Additionally, a live moderation team can handle the task of reviewing and rejecting content containing competitor logos and the like.
  • Controversial Content: UGC depicting overconsumption of food or alcohol abuse isn’t as entertaining as its content creators imagine it to be. In fact, such content reflects poorly on your brand and ultimately impacts your bottom line. From chain restaurants to online food delivery platforms, food and beverage brands should use live moderation teams to reject images portraying overeating or excessive drinking.

For the restaurant industry, user-generated images intentionally featuring overconsumption or obesity can undermine all your marketing efforts, especially if you’re in the fast-food sector. This makes it critical that content moderation includes rejecting certain images, such as an image of a customer sitting at a table in your restaurant eating 20 cartons of french fries. When it comes to the alcoholic beverage industry, UGC suggesting that alcohol leads to sexual success, as well as any content associating alcohol with driving or with minors, will be grounds for content rejection. It’s your responsibility to protect your customers online by finding a moderation solution that can catch any content encouraging overconsumption or communicating misleading promises.

What may be a harmless image to a steakhouse brand, for instance, will likely be offensive if it makes its way onto a vegan brand’s platform. Respect your customers’ expectations when they are interacting with your business by using appropriate moderation solutions to identify content featuring products they would find upsetting.

The Takeaway

Each industry has a different audience to protect, its own unique set of UGC challenges to overcome, and special considerations to take into account. To successfully moderate content that can be harmful to your audience and your brand, look for a company of experts that offers custom content moderation plans, advanced AI services, low-cost turnkey services, and dedicated live teams that can be trained on your specific moderation criteria.