
Frequently Asked Questions

Who should use the WebPurify Photo Moderation Service?

Anyone who is in need of easy-to-implement, live, fast, and efficient photo moderation or sorting services. Our solution can be used to moderate or filter any web-hosted photo including avatars, profile pictures, contest entries, photo album pics, etc.

How does WebPurify’s image moderation service work?

You may use our turnkey criteria and begin submitting photos right away to our live team or contact WebPurify to create custom image moderation criteria. Alternatively, you can use our Automated Intelligent Moderation Service (AIM) to detect more obvious violations in real time.

WebPurify offers three approaches to image moderation, all of which make use of easy-to-integrate APIs. These are:

  • AI moderation instantly checks images across a number of unwanted content categories (nudity, hate, drugs, etc.). This is effectively a real-time review (each image is checked, on average, within 250 milliseconds) and you have the option to customize which categories of content to moderate.
  • Human moderation sends uploaded images to our highly-trained moderators for manual review. Our in-house team reviews each image within 5 minutes (usually closer to 2 minutes or less) against our standard NSFW set of turnkey criteria. For more robust use cases, custom moderation is also available and can be tailored exactly to your business needs to enforce any brand or user criteria you like. We also dedicate teams of moderators to your platform using either our proprietary moderation tools or yours. Processing time varies depending on the number of moderators you retain and the complexity of the rules. To learn more, please contact our sales team.
  • Hybrid moderation blends AI and human-powered moderation in one solution. While our AI models are quite robust and accurately flag many categories of offensive content in images, WebPurify is a longtime proponent of combining AI and human moderation as a means of ensuring the most comprehensive approach to content review. Customers interested in hybrid moderation simply integrate WebPurify’s API and arrange for uploaded photos to be reviewed first by our AI service. Depending on the thresholds you set, the AI will either reject clearly violative content, or the submission will continue on to our human team for a closer look, at which point the image is rejected or given final approval. This second, human step does not require an additional API call.

How does WebPurify compare to crowdsourcing solutions?

  1. Many companies that leave crowdsourced solutions to become WebPurify clients first approach us out of concern about the exposure of their photos to insufficiently monitored freelance workers, who can easily capture and disseminate sensitive content with few, if any, repercussions. Privacy in this space is extremely important. Unlike crowdsourced shops, WebPurify’s highly trained and thoroughly vetted workforce moderates all photos from our secure office 24/7. Among other precautions, we employ a clean-desk policy, anti-screengrab software, and key-card secured workspaces.
  2. Many of our clients expressed disappointment with the crowdsourced options they used in the past. Common feedback centered on poor quality control; high error rates that prompt users to resubmit (which in turn incurs even more costs); a lack of flexibility and of the ability or willingness to work collaboratively and enforce custom criteria; and, finally, mismanagement of expectations (e.g., misleading claims about language proficiencies or SLAs).
  3. By their very nature, crowdsourced solutions make it difficult to achieve consistent results. In simple terms, you don’t quite know what you’re getting. Personnel age, geography, work environments, cultural biases, and varying amounts of training and experience can combine to cause irregular, or even inaccurate, enforcement of your brand criteria. What’s more, you’re not in direct contact with the people doing the moderation, so you can’t easily give feedback or communicate your needs beyond providing a written guideline, which might mean different things to different people or be misinterpreted. WebPurify, on the other hand, takes a tailored, consultative approach. By email and on calls, we collaborate with you to define custom moderation criteria and train members of our team accordingly to work on your use case. Our moderators operate under one roof, enjoy meaningful company camaraderie, use our proprietary moderation tools, and benefit from ongoing training and QC oversight. This investment in our people is reflected in our low employee turnover rate, which, of course, means seasoned, experienced talent working on your behalf. Adjusting criteria or scaling headcount is intuitive and speedy, with all your moderation resources “in one place”.
  4. Managing workers requires significant overhead and time, especially if they’re scattered across locations and time zones and your moderation needs are 24/7 (most are). Our Image Moderation Service eliminates this burden for our clients, providing a simple “fire-and-forget” alternative.
  5. Our track record speaks for itself. WebPurify works with over two dozen Fortune 500 brands and thousands of smaller businesses, from startups to SMBs. Recognized leaders in the Trust and Safety space for 17+ years, we help many companies that could – or did – try using crowdsourced moderation but ultimately chose us instead. And we appreciate it!

What if I have a large backlog of images that I need to be reviewed?

Not all of our clients use us for real-time moderation, i.e., reviewing images as they are uploaded by users. We are also more than happy to batch-review backlogs of images that have already been uploaded.

Can you help me with image metadata? Are you able to tag and sort the images you review?

Yes, we offer this content moderation service with our human moderation. Please contact our sales team to learn more.

Do you store the information you moderate? Are the images you review kept somewhere? Is my data secure?

The actual content is never kept on our servers and we never store your data. In its most typical configuration, our service reviews images via URL, where they are hosted. In the case of our human moderators, personnel undergo background checks before hiring and work in keycard-secured environments with a clean desk policy and anti-screengrab software installed at every workstation.

How accurate is your AI-based service?

The short answer is we’re very accurate. On average, across all of our AI image models, we detect more than 98.5% of offensive content. That said, accuracy is somewhat relative depending on which categories of unwanted content you’re having moderated and the rejection thresholds you set for each of them. For example, you might opt to reject any image that has a 50% chance of nudity or above. This would catch nearly all nudity, even the most marginal, but would risk a few more false positives. The opposite (a high rejection threshold) would in turn mean fewer false positives but a somewhat greater chance of missing borderline content.

How long does setup and integration of the API take?

Integrating our API, whether for AI-based or human image moderation, is a “light lift”. Provided you have a knowledgeable developer, it can be done in an afternoon. We offer ample API documentation on our website and you can always ask for help at support@webpurify.com.
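To give a sense of what that “light lift” looks like, here is a minimal sketch of assembling a moderation request in Python. The endpoint URL, method name, and parameter names (`api_key`, `method`, `imgurl`, `format`) below are illustrative assumptions rather than the official specification; please treat the API documentation on our website as the authoritative reference.

```python
# A minimal sketch of submitting a hosted image for moderation, assuming a
# REST-style endpoint that takes an API key, a method name, and an image URL
# as query parameters. Endpoint and parameter names are assumptions for
# illustration -- see the official API documentation for the exact format.
from urllib.parse import urlencode

API_BASE = "https://im-api1.webpurify.com/services/rest/"  # assumed endpoint

def build_imgcheck_request(api_key: str, img_url: str,
                           method: str = "webpurify.live.imgcheck") -> str:
    """Assemble the full request URL for an image-moderation call."""
    params = {
        "api_key": api_key,   # your WebPurify license key
        "method": method,     # assumed method name for live (human) review
        "imgurl": img_url,    # publicly reachable URL of the hosted photo
        "format": "json",     # ask for a JSON response instead of XML
    }
    return API_BASE + "?" + urlencode(params)

request_url = build_imgcheck_request("YOUR_API_KEY",
                                     "https://example.com/avatar.jpg")
# An HTTP GET on request_url would queue the image for review; the verdict
# is then delivered to your callback URL once moderation completes.
```

Because the service reviews images where they are hosted, the integration amounts to little more than constructing and sending requests like this one.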

Can you annotate photos for Computer Vision projects?

Absolutely. Use our Intelligent Humans to train your Artificial Intelligence.

Who performs the live moderation?

Our highly qualified in-house team. They’re prepared to handle your requests 24/7, 365 days a year. We never crowdsource or use part-time talent.

Can I use a combination of your live and automated solutions?

Sure, our AI can be seamlessly integrated with WebPurify’s live moderators so that photos with a high probability of nudity, partial nudity, weapons, hate symbols, and more are automatically identified before reaching the human team. In a variant of this approach, some customers opt to “pass along” images to our human team only when an initial review by our AI returns an “edge case” score that could use a second look. For instance, with WebPurify, you can easily set a rule saying, “If the AI says the chance of nudity is under 50%, accept; over 75%, reject; but if it’s in between, forward to humans for closer inspection.” It’s worth noting this hybrid approach does not require an additional API call.
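That routing rule can be sketched as a simple decision function. The threshold values are just the example figures from the rule above, and the function name is hypothetical; in practice you would feed it the nudity-likelihood score returned by the AI service.

```python
# A minimal sketch of the hybrid routing rule described above: accept when
# the AI's nudity score is low, reject when it is high, and forward the
# in-between "edge cases" to human moderators. The thresholds are the
# example values from the rule; tune them to your own risk tolerance.

ACCEPT_BELOW = 0.50   # scores under this are treated as clearly safe
REJECT_ABOVE = 0.75   # scores over this are treated as clearly violative

def route_image(nudity_score: float) -> str:
    """Map an AI nudity-likelihood score (0.0-1.0) to a moderation action."""
    if nudity_score < ACCEPT_BELOW:
        return "accept"
    if nudity_score > REJECT_ABOVE:
        return "reject"
    return "human_review"   # edge case: send to the live team

print(route_image(0.30))  # → accept
print(route_image(0.90))  # → reject
print(route_image(0.60))  # → human_review
```

Because the human review step happens on our side, your application only ever sees the final verdict; no second API call is needed.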

What criteria do you use to moderate photos?

By default, our live team rejects photos that violate our turnkey criteria. We define this as any photo that may:

  • contain nudity or partial nudity
  • portray hate or hate crimes
  • contain violence
  • contain offensive gestures or language (English Only)
  • contain drugs, drug paraphernalia, or drug use
  • appear blank or broken

However, we also provide custom solutions, allowing you to define your own exact criteria. To learn more, please email sales@webpurify.com.

What criteria does your artificial intelligence check images for?

Our Automated Intelligent Moderation (AIM) service returns a score indicating the likelihood of the following being present in images:

  • Nudity and partial nudity (swimsuits etc.)
  • Weapons
  • Drugs and paraphernalia
  • Alcohol
  • Hate symbols and hateful images
  • Offensive gestures
  • Faces (including gender and whether they’re underage)
  • Gore
  • Celebrity likenesses
  • QR codes
  • Images that are found elsewhere online
  • Gambling
  • Text (whether it’s profane or not)
  • And more!

How long does it take for a photo to be moderated?

Our live team moderates photos within 5 minutes, 24/7, 365 days a year. Over 90% of requests are returned in under 2 minutes. Our Automated Intelligent Moderation (AIM) service delivers results in real time.

How can I test the service during development without having to pay for moderation?

We provide two methods for testing: one for verifying accuracy and one for verifying your integration.

Testing Accuracy:
We provide 100 free image moderations for new accounts. Simply sign up for a free trial and integrate our service into your app or use your web browser. Run out of credits? Drop us a line and we’ll be happy to give you a few more.

Testing Your Integration:
Our sandbox method, webpurify.sandbox.imgcheck, can be used to verify that your integration is working correctly. All photos submitted to this method bypass our live moderators and are moderated automatically (at random). The service will hit your callback URL to complete the process. Keep in mind that this call may return incorrect verdicts, since it bypasses our moderators and simply verifies a successful connection.

How long does setup take for a custom human moderation service?

Our dedicated human moderation teams are full-time WebPurify employees who work solely on your project, 24/7, to enforce custom criteria (within reason, this can include any rules you specify). The high-touch nature of this service and our uncompromising commitment to quality control and accuracy means we require 3-4 weeks’ lead time to properly train our team on your guidelines before going live.

What about copyright and intellectual property infringement? Logo detection?

Given the multitude of brands and logos across the world, we don’t have an AI model that will detect “everything” in this respect. We do offer AI recognition of major corporate brands, logos and slogans, however, and can detect symbols like © or ™. Further, our human moderation teams are able to perform IP infringement checks via a number of methods, whether for artwork, documents, or other products.

Do you offer special pricing for high volumes?

Based on economies of scale, we charge less per image as volumes increase. Contact our sales team to learn more about our volume tiers and price breaks. We consistently find that our pricing is the best on the market.

Can you detect in-image text (i.e. offensive language on t-shirts, etc.)?

Yes, in more than 15 languages that use the Latin (Roman) alphabet. In addition to detecting profanity, we also flag inappropriate phrases and harmful sentiments (e.g., “Kill all _____ people”).

Does your image service moderate GIFs and similar formats?

Yes, it does.

Do you offer custom models?

Absolutely! We invite you to contact our sales team to discuss your exact needs further.

Do you offer a free trial?

Of course. Sign up any time for a free, two-week trial of our moderation solutions and experience unmatched speed, accuracy, and efficiency in action. If you happen to need the trial period extended, just drop us a line.

More questions? Contact us.


[Pricing tables: criteria for photo and video moderation]

* Speciality category ($0.0015 per photo)
** Speciality category ($0.06 per photo)

Other Language Label: If a video complies with our moderation criteria but contains languages other than English, we will label it as “Other Language.”