Request a Demo

Image moderation, supercharged.

Build your brand with confidence using the best in
artificial and human intelligence.

Try it free

Every solution you need to eliminate unwanted visual content.

AUTOMATED AI-powered moderation

Our ready-made, easy-to-integrate AI models detect images that have a high probability of containing risky or undesirable content, reducing the need for human review. What's more, simple customization options let you tailor the solution to your unique brand needs.

Learn More

MANUAL Expert human moderation

Give your moderation the human touch to improve accuracy. Our teams are trained using custom criteria to “read between the lines” and flag violations that typically involve nuance and context.

  • Consistently superior image moderation that scales
  • Use either our turnkey criteria or your custom criteria
  • Seamless API integration gets you up and running fast

Learn More

HYBRID AI + human image moderation

Put the perfect combination of AI efficiency and human insights to work for your brand—and introduce the fastest, most cost-effective and reliable way to moderate user-generated image content.

  • Best-in-class service for your brand and your users
  • Unprecedented speed, accuracy, and efficiency

See How It Works

See hybrid image moderation in action

Select a sample image or upload your own.

Drag or upload your own image here...
Challenge for AI

AI moderation deems this image to be “safe”.

However, inappropriate images can sometimes slip past all AI-based systems. That's why human moderation for context and nuance is critical for many applications.

Submit for Human Review

Images will be reviewed for nudity, partial nudity, sexual suggestiveness, hate, violence, graphic/disturbing content, offensive gestures and/or text (English only), and drugs. Customers can define their own custom criteria based on specific project needs.

For demonstration purposes, rejection thresholds for unsafe categories were set at scores of 50% or higher. Please note that, as a customer, you can change these thresholds to be more or less strict.

AI moderation deems this image to be “unsafe”.

While human moderation is not typically needed in cases like this, as a customer you can opt to have content reviewed by our in-house team, checking against your custom-defined criteria. You have full control of what probability threshold score (likelihood of nudity, etc.) would trigger this human check.


Human review underway...

Reviewing for nudity, partial nudity, sexual suggestiveness, hate, violence, graphic/disturbing content, offensive gestures and/or text (English only) and drugs — all fully customizable to your specific project needs.

This image was approved by our moderators

Start my free 100 photo trial

This image was rejected by our moderators


Moderate all kinds of user-generated images

  • Profile Pictures
  • Social Posts
  • Stock Photography
  • Product Listings
  • Avatars
  • Product Customization
  • Agency Campaigns

Start your free image moderation trial

Image Moderation FAQs

How does WebPurify’s image moderation service work?

You can use our turnkey criteria and begin submitting photos to our live team right away, or contact WebPurify to create custom image moderation criteria. Alternatively, you can use our Automated Intelligent Moderation Service (AIM) to detect more obvious violations in real time.

WebPurify offers three approaches to image moderation, all of which make use of easy-to-integrate APIs. These are:

  • AI moderation instantly checks images across a number of unwanted content categories (nudity, hate, drugs, etc.). This is effectively a real-time review (each image is checked, on average, within 250 milliseconds), and you can customize which categories of unsafe content to moderate.
  • Human moderation sends uploaded images to our highly trained moderators for manual review. Our in-house team reviews each image within 5 minutes (usually closer to 2 minutes or less) against our standard NSFW set of turnkey criteria. For more robust use cases, custom moderation is also available and can be tailored exactly to your business needs to enforce any brand or user criteria you like. We also dedicate teams of moderators to your platform using either our proprietary moderation tools or yours. Processing time varies depending on the number of moderators you retain and the complexity of the rules. To learn more, please contact our sales team.
  • Hybrid moderation blends AI and human-powered moderation in one solution. While our AI models are quite robust and accurately flag many categories of inappropriate content in images, WebPurify is a longtime proponent of combining AI and human moderation to ensure the most comprehensive approach to content review. Customers interested in hybrid moderation simply integrate WebPurify’s API and arrange for uploaded photos to be reviewed first by our AI service. Depending on the thresholds you set, the AI will either reject clearly violative content, or the submission will continue on to our human team for a closer look, at which point the image is rejected or given final approval. This second, human step does not require an additional API call.
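To make the hybrid routing concrete, here is a minimal sketch of the threshold logic described above. The category names, scores, and function are illustrative assumptions for demonstration only, not WebPurify's actual API or response format:

```python
# Illustrative sketch of a hybrid decision flow: AI scores each category,
# and thresholds decide whether to auto-reject, send to human review,
# or auto-approve. All names and numbers here are hypothetical.

def hybrid_decision(scores, reject_threshold=0.95, review_threshold=0.50):
    """Route an image based on per-category AI scores (0.0 to 1.0).

    - Any score at or above reject_threshold: reject immediately.
    - Otherwise, any score at or above review_threshold: human review.
    - Otherwise: approve automatically.
    """
    if any(s >= reject_threshold for s in scores.values()):
        return "reject"
    if any(s >= review_threshold for s in scores.values()):
        return "human_review"
    return "approve"

print(hybrid_decision({"nudity": 0.02, "violence": 0.01}))  # approve
print(hybrid_decision({"nudity": 0.62, "violence": 0.01}))  # human_review
print(hybrid_decision({"nudity": 0.99, "violence": 0.01}))  # reject
```

Lowering `review_threshold` sends more borderline images to humans (higher accuracy, higher cost); raising it lets the AI handle more on its own.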

What if I have a large backlog of images that I need to be reviewed?

Not all of our clients use us for real-time moderation (reviewing images as users upload them). We are also more than happy to batch-review backlogs of images that have already been uploaded.

Can you help me with image metadata? Are you able to tag and sort the images you review?

Yes, we offer this service with our human moderation. Please contact our sales team to learn more.

Do you store the information you moderate? Are the images you review kept somewhere? Is my data secure?

The actual content is never kept on our servers and we never store your data. In its most typical configuration, our service reviews images via URL, where they are hosted. In the case of our human moderators, personnel undergo background checks before hiring and work in keycard-secured environments with a clean desk policy and anti-screengrab software installed at every workstation.

How accurate is your AI-based service?

The short answer is we’re very accurate. On average, across all of our AI image models, we detect more than 98.5% of offensive content. That said, accuracy is somewhat relative, depending on which categories of suggestive content you’re having moderated and the rejection thresholds you set for each of them. For example, you might opt to reject any image with a 50% or higher probability of nudity or explicit nudity. This low threshold would catch nearly all nudity, even the most marginal, but would risk a few more false positives. The opposite (a high rejection threshold) would in turn mean slightly lower detection but fewer false positives.

How long does setup and integration of the API take?

Integrating our API, whether for AI-based or human image moderation, is a “light lift”. Provided you have a knowledgeable developer, it can be done in an afternoon. We offer ample API documentation on our website, and you can always reach out to our support team for help.
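As a rough illustration of how light the lift is, the sketch below builds a REST request URL for an image check. The endpoint, method name, and parameter names are placeholders invented for this example; consult WebPurify's API documentation for the real values:

```python
# Hypothetical sketch of constructing an image-check API request.
# Endpoint and parameter names are placeholders, not WebPurify's
# actual API; see the official API docs for real values.
from urllib.parse import urlencode

def build_imgcheck_request(api_key, image_url,
                           base="https://api.example.com/services/rest/"):
    """Return the full request URL for a single image-check call."""
    params = {
        "method": "imgcheck",  # placeholder method name
        "api_key": api_key,
        "imgurl": image_url,   # image is referenced by URL, not uploaded
        "format": "json",
    }
    return base + "?" + urlencode(params)

url = build_imgcheck_request("YOUR_API_KEY", "https://example.com/photo.jpg")
print(url)
```

Passing images by URL (rather than uploading bytes) matches the service's typical configuration, where images are reviewed where they are hosted.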

How long does setup take for a custom human moderation service?

Our dedicated human moderation teams are full-time WebPurify employees who work solely on your project, 24/7, to enforce custom criteria (within reason, this can include any rules you specify). The high-touch nature of this content moderation service and our uncompromising commitment to quality control and accuracy mean we require 3-4 weeks’ lead time to properly train our team on your guidelines for explicit images before going live.

What types of inappropriate images does WebPurify’s AI-based image moderation flag?

We offer a range of AI models that are trained to detect and remove:

  • Nudity and partial nudity
  • Weapons
  • Drugs
  • Alcoholic beverages
  • Hate symbols and hateful images
  • Offensive gestures
  • Faces (including gender and whether they’re underage)
  • Gore
  • Celebrity likenesses
  • Gambling
  • Blank or broken images
  • Text (whether it’s profane or not)
  • And more!

What about copyright and intellectual property infringement? Logo detection?

Given the multitude of brands and logos across the world, we don’t have an AI model that will detect “everything” in this respect. We do offer AI recognition of major corporate brands, logos and slogans, however, and can detect symbols like © or ™. Further, our human moderation teams are able to perform IP infringement checks via a number of methods, whether for artwork, documents, or other products.

How long does it take to process an image?

AI-based processing is essentially real time (~ 250 milliseconds per image). Standard processing by our human image moderators typically takes 2 minutes, though occasionally longer (up to 5 minutes, but not more).

Do you offer special pricing for high volumes?

Based on economies of scale, we charge less per image as volumes increase. Contact our sales team to learn more about our volume tiers and price breaks. We consistently find that our pricing is the best on the market.

Can you detect in-image text (e.g., offensive language on t-shirts)?

Yes—in more than 15 languages that use the Latin (Roman) alphabet. In addition to detecting profanity, we also flag inappropriate phrases and harmful sentiments (e.g., “Kill all _____ people”).

Does your image service moderate GIFs and similar formats?

Yes, it does.

Do you offer custom models?

Absolutely! We invite you to contact our sales team to discuss your exact needs further.

Do you offer a free trial?

Of course. Sign up any time for a free, two-week trial of our moderation solutions and experience unmatched speed, accuracy, and efficiency in action. If you happen to need the trial period extended, just drop us a line.

See all FAQs

Ready to get started?

Start Your Free 100 Photo Trial