
The many combinations of WebPurify’s services that are used to keep users safe

July 25, 2024 | UGC

User-generated content (UGC) is the lifeblood of any online platform. From social networks to e-commerce sites, UGC drives engagement, fosters community, and adds significant value to your platform. But with the rise of UGC comes the challenge of maintaining a safe and respectful environment for your users.

This is where WebPurify can help, offering a suite of content moderation services designed to cater to a wide range of needs and use cases. We work with everyone from startups to one in seven Fortune 100 companies; below, we explore the different combinations of our services that our many clients use to keep their users safe and their platforms thriving.


Common ‘types’ of WebPurify customers

1. Transactional Users with Basic Needs

A common WebPurify customer is the small business with a low volume of user-generated content that needs review. These customers often need content moderation services but operate on tight budgets. At WebPurify we believe in a safer internet for everyone, which is why we’ve created service tiers to meet the needs of companies and budgets of all sizes.

Let’s say you’re a brand-new dating site and simply want to vet a few thousand profile images a month. You likely don’t have the resources for a dedicated, custom moderation team, but you might also think you can’t afford to delegate this work in a more plug-and-play manner to someone like us. WebPurify’s turnkey live team service was designed exactly with you in mind!

Unlike our dedicated human moderation teams, the turnkey live team moderates content for a number of our customers, enforcing a standard set of not-safe-for-work rules and ensuring quick, cost-effective moderation at just 2 cents per image. This approach allows smaller companies to maintain a safe environment on their platform without the high costs inherent in more bespoke solutions.

2. Dedicated Human Moderation Teams

On the other hand, many of our customers require a custom, dedicated human moderation team. These are typically clients with nuanced criteria that are challenging for AI to handle alone because they are subjective or contextual in nature. For example, a company might want to ensure no images of “reckless behavior” or “irresponsible drinking” appear on its platform. These use cases require a keen human eye to interpret and enforce gray areas. Additionally, human moderators are invaluable for tasks like KYC (Know Your Customer) checks, new profile vetting, and sorting and tagging stock images or videos.

3. AI-First with Human Support

Another category of WebPurify customers is those who start with AI moderation and later add human oversight to the workflow as their moderation needs evolve. This often happens when a platform offers a new type of UGC that is more challenging to moderate and potentially more damaging to the brand. For instance, consider a company that allows users to upload images. It might initially use our AI content moderation for this task, but what happens when it then introduces the option for users to caption those photos? Suddenly the complexity of its moderation needs has increased. Introducing this sort of new UGC feature is when many customers add human moderators alongside their AI moderation.

As companies evolve, they find themselves requiring the added accuracy and contextual understanding that only human moderators can provide. This approach is especially common in industries where UGC is constantly changing, and staying competitive means continuously updating your moderation strategies.

4. Human-First with AI Scaling

Conversely, some brands begin with human moderation and integrate AI as they scale. For example, the same dating site mentioned in our first example might manage 1,000 images a month with human moderators alone. However, when the volume grows to 10 million images a month, AI becomes essential to handle the added scale. A typical workflow here uses AI as a “first pass” to tackle obvious violations, then escalates images that receive “gray area” scores from the AI to humans for closer inspection and a final decision, as sketched below. This allows companies to increase the speed with which they process content without sacrificing accuracy or safety.
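To make that concrete, here’s a minimal Python sketch of such a first-pass routine. The ai_score input, the thresholds, and the review queue are illustrative placeholders, not WebPurify’s actual API:

```python
from queue import Queue

# Hypothetical "AI first pass" workflow: the AI scores each image, obvious
# violations are rejected outright, obvious passes are approved, and only the
# gray area is escalated to human moderators. The function name and thresholds
# are illustrative, not WebPurify's actual API.

human_review_queue: Queue = Queue()

def first_pass(image_url: str, ai_score: float) -> str:
    """ai_score is a 0-100 likelihood that the image violates policy."""
    if ai_score >= 90:
        return "rejected"              # clear violation: no human needed
    if ai_score <= 10:
        return "approved"              # clearly safe: publish immediately
    human_review_queue.put(image_url)  # gray area: a human makes the final call
    return "pending_human_review"
```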

5. Mixed Services for Different Sub-Products

Certain clients use different WebPurify services for various sub-products within their user base. For example, consider an online video game platform that has separate chat rooms for users over and under 18. Each of these rooms will have distinct moderation guidelines that necessitate more or less strict rules. WebPurify’s flexible (and unlimited) API keys allow clients to tailor their moderation approach to these different segments accordingly, ensuring appropriate content standards for each user demographic.
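As a rough illustration, a client might keep a per-segment configuration like the Python sketch below. The API key strings and category names are hypothetical placeholders rather than real WebPurify credentials or parameters:

```python
# Hypothetical per-segment configuration: each sub-product uses its own API key
# and its own strictness settings. The keys and category names below are
# placeholders, not real WebPurify credentials or parameters.

MODERATION_PROFILES = {
    "chat_under_18": {
        "api_key": "WP_KEY_MINORS_ROOMS",   # stricter ruleset for under-18 rooms
        "blocked_categories": ["nudity", "violence", "drugs", "profanity", "bullying"],
    },
    "chat_over_18": {
        "api_key": "WP_KEY_ADULT_ROOMS",    # more permissive ruleset for adult rooms
        "blocked_categories": ["nudity", "violence", "drugs"],
    },
}

def profile_for(room_type: str) -> dict:
    """Look up which key and rules to apply for a given chat room."""
    return MODERATION_PROFILES[room_type]
```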

6. Consulting Services: Trust & Safety Expertise

WebPurify doesn’t just offer content moderation; we provide consulting services that impart our trust and safety expertise. This includes helping customers create moderation guidelines, developing workflows, and advising on best practices. Companies often turn to WebPurify for our extensive knowledge in crafting sound moderation strategies that can adapt to evolving content trends and user behaviors.

7. SiteScan

WebPurify’s SiteScan service is crucial for clients who need to review content “by the page”. In other words, as the name suggests, it’s for clients who want to “scan” an app or website containing large volumes of text, image, and video content. SiteScan reviews all of a web property’s pages, including unlimited domains and subdomains, ensuring thoroughness. Bear in mind it’s used as a complement to, not a replacement for, our text, image, and AI video moderation models. As it scans, the content it encounters is fed to these models, which do the actual assessment to determine relative appropriateness.

This is a drastically different approach to moderation in that it’s largely after the fact, in contrast to most of our clients’ use cases, which involve real-time review of content as it’s uploaded by their users. It’s no less important, however.

SiteScan is often utilized by companies that acquired another team or business and want to ensure that business’s users’ profiles don’t have proverbial skeletons in the closet in the form of inappropriate bios or posts. Similarly, brands that were once more casual about trust and safety but have since tightened their moderation safeguards might rely on SiteScan to find and remove older inappropriate content that slipped through when their content checks were less rigorous.

As with any service involving WebPurify’s AI models, clients are empowered to customize the probability scores at which submissions are accepted or rejected across each category of unwanted content (nudity, weapons, drugs, gore, GenAI, etc.), thereby aligning their approach with their appetite for risk. All returned scores are between 0 and 100%. So, for example, a brand using WebPurify might opt to have an image with a 70% or greater likelihood of nudity rejected outright, a score of 40% or under accepted, and scores in the middle escalated to humans for a second, closer inspection.
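Expressed in code, that kind of per-category policy might look like the simplified Python sketch below, with the 40%/70% nudity bands taken from the example above and the other cut-offs purely illustrative:

```python
# Simplified sketch of per-category thresholds. Scores are the 0-100 likelihoods
# described above; the category names and cut-offs are illustrative only.

THRESHOLDS = {
    # category: (auto-accept at or below, auto-reject at or above)
    "nudity":  (40, 70),   # matches the example in the text
    "weapons": (30, 60),
    "gore":    (20, 50),
}

def decide(category: str, score: float) -> str:
    accept_at, reject_at = THRESHOLDS[category]
    if score >= reject_at:
        return "reject"      # e.g. a 70%+ nudity score is rejected outright
    if score <= accept_at:
        return "accept"      # e.g. a nudity score of 40% or under is accepted
    return "escalate"        # the middle band goes to human review
```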

8. Pattern Matching

Pattern matching is essential for any brand that is particularly concerned with IP infringement and needs to tackle it at scale. WebPurify’s expert team of human moderators has well-trained eyes and reverse image search tools to expedite such IP checks, but this approach becomes impractical beyond a certain volume, and that’s where Pattern Matching shines.

In simple terms, the service automatically checks submitted content against a “block list” of unwanted images and rejects anything found to be an 80% or greater match.

Clients can create and update multiple custom block lists tailored to their specific needs. If you make shoes, for example, and offer customization of certain models, Pattern Matching could be leveraged to prevent customers from uploading logos of brands like Gucci or Louis Vuitton. Pattern matching ensures only original and approved designs are uploaded, maintaining product integrity and avoiding legal headaches.
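Conceptually, the check behaves like the Python sketch below; the binary fingerprints and similarity measure are illustrative stand-ins for WebPurify’s internal matching, with the 80% threshold drawn from the description above:

```python
# Conceptual sketch of block-list matching: each image is reduced to a compact
# binary fingerprint, and a submission is rejected if it matches any fingerprint
# on the block list at 80% or above. The fingerprints and similarity measure are
# illustrative stand-ins, not WebPurify's internal representation.

BLOCK_LIST = {
    "gucci_logo":         "1011001110001111",
    "louis_vuitton_logo": "0110110010101100",
}

def similarity(fp_a: str, fp_b: str) -> float:
    """Percentage of matching bits between two equal-length fingerprints."""
    matches = sum(a == b for a, b in zip(fp_a, fp_b))
    return 100.0 * matches / len(fp_a)

def violates_block_list(submission_fp: str, threshold: float = 80.0) -> bool:
    """True if the submission matches any block-listed image at or above the threshold."""
    return any(similarity(submission_fp, fp) >= threshold for fp in BLOCK_LIST.values())
```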

9. Audio and Video Moderation

In industries with high volumes of video content, WebPurify’s audio transcription and moderation services are indispensable. Clients can opt in to AI-based transcription of a video’s audio, which is in turn run through our text moderation services, all in a single API call. This approach is especially useful for brands that need to ensure both visual and audio content adhere to safety standards, but whose budget or target turnaround time simply doesn’t permit human review.
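Behind that single call, the pipeline amounts to something along these lines. The Python below is a hypothetical sketch, with placeholder function names standing in for the transcription, text moderation, and visual moderation steps:

```python
# Hypothetical view of what the combined call does behind one request:
# transcribe the audio track, then moderate the transcript as text alongside
# the visual checks. The function names are placeholders, not the real API.

def moderate_video(video_url: str, transcribe, check_text, check_frames) -> dict:
    transcript = transcribe(video_url)            # AI speech-to-text on the audio track
    return {
        "audio_flags": check_text(transcript),    # profanity, threats, etc. in the speech
        "visual_flags": check_frames(video_url),  # nudity, weapons, etc. in the frames
    }
```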

Oftentimes, as new accounts on a platform earn and maintain solid reputation scores, WebPurify clients make the economical and expeditious choice to forgo audio moderation after a time, reviewing only a video’s visuals, or vice versa. The idea is that trusted accounts needn’t be reviewed as comprehensively, saving money and hastening processing times.

The Importance of Combining AI and Human Moderation

The dynamic nature of UGC necessitates a flexible approach to content moderation. By combining AI and human approaches, we offer our clients the best of both worlds: the scalability and efficiency of AI with the contextual understanding and accuracy of human moderators. This hybrid approach ensures platforms can handle large volumes of content without having to compromise quality or safety.
