
Our Custom Moderation Process: How We Define Rules and Standards for New Clients

January 7, 2019 | Image Moderation, UGC

Custom moderation standards and criteria depend on the unique needs of each client. Where the moderation line gets drawn reflects each brand’s audience and culture. Take a peek behind the curtain as we detail the general framework of our approach to launching a new moderation campaign.

#1 – Starting Point

WebPurify’s Standard Image Moderation Criteria cover the common offenders that most brands want to filter: nudity, hate, violence, offensive gestures or language, drugs, and broken images or video. Our AI and moderation teams are fully trained on these standards, and they’re a good foundation for any new client to begin building their own moderation rules.

#2 – Customizing and Building on the Basics

Our Standard Criteria might be too conservative for some brands, or not conservative enough, or a combination of both across categories. For example, a client in Europe may find our standard nudity filter too stringent. Clients in US states where marijuana is now recreationally legal may want to relax moderation of marijuana imagery while continuing to reject other illegal drugs.
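To make that concrete, here’s a minimal sketch of how per-client overrides can sit on top of a standard baseline. The category names and structure below are our illustration for this post, not WebPurify’s actual schema:

```python
# Illustrative sketch: the standard criteria act as a baseline,
# and each client's customizations are layered on top.

STANDARD_CRITERIA = {
    "nudity": "reject",
    "hate": "reject",
    "violence": "reject",
    "offensive_gestures": "reject",
    "offensive_language": "reject",
    "marijuana": "reject",
    "other_drugs": "reject",
    "broken_media": "reject",
}

def build_client_criteria(overrides):
    """Start from the standard rules, then layer the client's overrides on top."""
    criteria = dict(STANDARD_CRITERIA)
    criteria.update(overrides)
    return criteria

# Example: a client in a state with legal recreational marijuana relaxes
# that one category but keeps the rest of the standard rules intact.
cannabis_friendly = build_client_criteria({"marijuana": "accept"})
```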

In addition to modifying the standard criteria, some brands target a niche market and therefore need images to conform to rules that have nothing to do with offensive content. As an example, let’s say a new client runs an app for users to share photos of their dogs (we’d download that). They only allow images of dogs. At first glance, the criteria seem simple. Does the content include a dog? Accept. No dog? Reject. But what if a user uploads a painting of a dog, or a wolf in the wild, or their kid’s stuffed toy dog? There’s always a gray area to define.
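Here’s what that gray area might look like if you tried to write the rule down. The labels and the three-way outcome are hypothetical, meant only to show why “dog or no dog” isn’t enough once real uploads start arriving:

```python
# Hypothetical sketch of the dog-app rules; the labels are stand-ins,
# not a real image-recognition API.

APPROVE, REJECT, ESCALATE = "approve", "reject", "escalate"

# Edge cases the client hasn't ruled on yet get escalated rather than guessed at.
GRAY_AREA_LABELS = {"dog_painting", "wolf", "stuffed_toy_dog"}

def moderate_dog_photo(labels):
    """Apply the 'dogs only' rule to an image's detected labels."""
    if labels & GRAY_AREA_LABELS:
        return ESCALATE  # defer to the client until the rule is defined
    if "dog" in labels:
        return APPROVE
    return REJECT

print(moderate_dog_photo({"dog", "park"}))     # approve
print(moderate_dog_photo({"cat", "sofa"}))     # reject
print(moderate_dog_photo({"wolf", "forest"}))  # escalate
```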

#3 – Client Review

At this point, the client confers (often with their legal team) and returns with general rules for refinement. It’s a balancing act of protecting the user experience and encouraging participation while also minimizing the risk of content that will offend users and harm the brand.

We ask clients to consider what they know about their users, the expected volume of UGC, budget considerations, and more. It’s an opportunity to take a clear-eyed look at the details and determine priorities.

#4 – WebPurify Clarifications

We return with recommendations, questions, and requests for input on gray areas we’ve identified. We’re as thorough as possible in the beginning stages. Although every brand is different, our experience tells us where variables tend to arise and which vulnerabilities are associated with different types of UGC campaigns or content. Ultimately, the client decides what rules are set in place, and our teams apply those rules as consistently and efficiently as possible.

#5 – Launch

The consultation phase is complete, the client’s unique rules are in place, and we’re ready to go live. The details and variables that were unpredictable are about to become clear.

#6 – The Feedback Loop

Early in a project, we escalate questionable content directly to the client. Using the client moderation tool built into our dashboard, clients can comment on the images we’ve submitted for their review. Our team uses those comments to further refine our rules and training.

Initially, we may over-reject images as we sort through the minutiae and address new variables. As we add nuance and continue training our AI and moderation teams, escalations become fewer.
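In spirit, the early feedback loop works something like the sketch below. The confidence threshold, field names, and review queue are assumptions for illustration, standing in for the client moderation tool described above:

```python
# Simplified sketch of the feedback loop; values and fields are illustrative.

CONFIDENCE_THRESHOLD = 0.85  # below this, a call counts as "questionable"

def route_decision(image_id, decision, confidence, client_queue):
    """Early in a project, questionable calls go to the client for
    review instead of being finalized."""
    if confidence < CONFIDENCE_THRESHOLD:
        client_queue.append({"image_id": image_id, "our_call": decision})
        return "escalated"
    return decision

def apply_client_comment(comment, gray_area_labels):
    """Each client comment turns a gray area into an explicit rule,
    so that label stops triggering escalations."""
    if comment["verdict"] == "acceptable":
        gray_area_labels.discard(comment["label"])
```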

#7 – Testing and Adjustments

Our QC team studies client feedback and drops in test images to make sure the moderation team is up to speed on new rules.

If a client tells us an image was incorrectly moderated, we can look up its unique image identification number, follow up with the individual moderator, and increase testing around that rule for the entire team.
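As a simplified illustration, that follow-up amounts to an audit-trail lookup like the one below. The log fields and test queue are hypothetical, not our internal systems:

```python
# Illustrative only: a minimal audit-trail lookup keyed by image ID.

moderation_log = {
    "img_10293": {"moderator": "mod_07", "rule": "no_weapons", "decision": "reject"},
}

qc_test_queue = []

def follow_up_on_misfire(image_id):
    """Trace a disputed decision back to its moderator and rule, then
    queue extra test images on that rule for the whole team."""
    record = moderation_log[image_id]
    qc_test_queue.append({"rule": record["rule"], "scope": "entire_team"})
    return record["moderator"]
```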

#8 – Continuing Modifications

As the project goes on and we become more familiar with the varying types of content users are submitting, we continue to tweak and adjust. We prioritize consistency, while addressing new content moderation concerns.