
Crowdsourcing For Image Moderation – What are the Dangers?

August 10, 2023 | Image Moderation


In the dynamic world of content marketing, it’s critical to strike a balance between encouraging user engagement and maintaining a safe, welcoming digital environment. If you’re gearing up to launch a new content marketing campaign and mulling over the use of a crowdsourced solution for moderating user-generated content (UGC) – in this case, user-uploaded images – on your platform, we urge you to hit the pause button and give this decision some serious thought.

And this is true whether you’re spearheading an aggressive content marketing campaign, cultivating an online community, or curating a gallery of UGC.

The prospect of harnessing the collective power of the crowd to filter through UGC may initially seem enticing; however, there are crucial reasons why this method poses a significant risk to your brand reputation.

Before we dive into these potential pitfalls, it’s essential to ensure we’re all on the same page about what exactly crowdsourcing entails. How does it operate within the realm of image moderation, and why should it be treated with a healthy dose of caution? Let’s unpack this together.

What is crowdsourcing?

Flashback to the mid-2000s: crowdsourcing was a hot new concept that many claimed would revolutionize the business world. Simply stated, crowdsourcing is where a company taps into the efforts of large online groups of people to accomplish various tasks. In other words, labor is sourced from a crowd.

Jeff Howe, who popularized the term in his Wired article, “The Rise of Crowdsourcing,” went so far as to suggest that crowdsourcing is “driving the future of business.”

But is crowdsourcing really living up to the hype?

Does Crowdsourcing Work?

Yes and no. There are certainly many crowdsourcing success stories.

Perhaps the most famous example of crowdsourcing is Wikipedia, the world’s largest encyclopedia that boasts over 26 million articles created and maintained entirely by volunteers. Anyone can anonymously edit a Wikipedia article — therein lies both its power and its shortcomings. But we’ll get to that in a moment.

Microsoft is probably the most well-known corporate enterprise to successfully make use of crowdsourcing. For the beta test of Office 2010, Microsoft allowed 9 million people to download and test the software — and received over 2 million comments.

Another example is NASA’s Kepler program, which found a way to harness the power of the crowd with a game called Planet Hunters. It enables amateur astronomers to detect new planets, and thus far volunteers have discovered 69 potential new worlds after playing 3 million rounds of the game.

The Dangers of Crowdsourcing

While crowdsourcing has yielded powerful results for online businesses and organizations, it has also proven destructive. Some examples are rather humorous; others have had dangerous consequences.

A very poignant example was the investigation of the tragic Boston Marathon bombing. The FBI crowdsourced the identification of the two men responsible, and Reddit users began their own investigation in a subreddit called “findbostonbombers”. After several of the group’s false identifications led to innocent people and their families being harassed and threatened, the founder of the subreddit declared the effort a “disaster” and “doomed from the start,” adding that he was “naive” to think it could work. He closed his Reddit account shortly after the failed effort.

But why did this crowdsourcing effort fail so miserably? The reason is actually quite simple.

Anonymous, Untrained and Unaccountable

Crowdsourcing, despite its initial allure, frequently falls short of delivering reliable results, especially for tasks as sensitive and critical as image moderation. It often fails because the volunteers who participate are anonymous, untrained and unaccountable.

1. Anonymity
Participants in a crowdsourcing scenario are often hidden behind the veil of the internet, operating under nebulous usernames or online profiles. This anonymity makes it difficult, if not impossible, to verify the identities or qualifications of those involved in your project. The lack of transparency can severely compromise the integrity of the moderation process.

2. Lack of Training
Crowdsourced contributors, in most cases, lack specialized knowledge or training in the task they are recruited to perform. In the context of image moderation, this can lead to inconsistent interpretation of guidelines, misjudged decisions, and ultimately, a compromised user experience on your platform. Inconsistencies and inaccuracies are rife when untrained individuals are trusted with complex or nuanced tasks.

3. No Accountability
The anonymous nature of crowdsourcing leaves little room for enforcing accountability. This is particularly concerning when mission-critical tasks or sensitive data are involved. With little to no consequences for erroneous actions or violations of terms, crowdsourced contributors may not exercise sufficient vigilance or care, thereby jeopardizing your platform’s safety and reputation.

To understand why this is so problematic for image moderation, let’s take a look at the steps involved.

How Does Image Moderation Work?

Let’s look more closely at the image moderation process to better understand its intricacies and the crucial role each step plays. Generally speaking, image moderation services work as follows:

1. Image Submission
The process begins when you submit the URL of an image to a moderation service. Provided the service is API-based – and most are – this is termed “calling” the API. Typically, the images in question are uploaded by users to share with an online community, post in chat, update their profile and so forth. The act of submission is a request to have these images scrutinized to ascertain whether they align with your platform’s content policies.

2. Image Review
This is the core part of the moderation process. The submitted images are carefully reviewed either by individuals, automated systems, or a combination of the two depending on the service you’re using. The purpose of this review is to assess the images against your predefined standards and criteria, looking out for any inappropriate content, offensive material, or anything else you deem unacceptable.

3. Feedback Provision
Once the review process is complete, the results are returned to a callback URL you have provided (this is the API “response”). This feedback usually includes the status of each image, along with details on the content found (if any), and sometimes additional tags for context (e.g., “bare chest” or “hate symbol”). It’s worth noting that most image moderation models express their results as percentage likelihoods across categories. In other words, among other information, each image’s returned results will include scores between 0 and 100% for nudity, weapons, drugs and so on. The higher the percentage, the greater the chance that type of content is present. This detailed communication ensures you understand the decisions made during the review process and provides valuable data with which you can tease out trends in your UGC and identify users or accounts that are repeat offenders.

4. Approval or Rejection
Based on the feedback received, your platform or team makes the final decision to either approve or discard the image. Approved images are published and become visible to your users. Rejected images, on the other hand, are either deleted or quarantined for further review – or even, in egregious cases, forwarded to law enforcement.

The process above might seem straightforward, but it demands a keen eye, expert judgment, and consistency to maintain the quality of content on your platform. Trusting these important steps to an anonymous, untrained crowd could lead to oversights and serious fallout.
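For teams integrating such a service, here is a rough sketch of what steps 1, 3 and 4 might look like in code. It’s written in Python against a purely hypothetical endpoint – the URL, field names and category scores are assumptions for illustration, not any specific vendor’s actual API (step 2, the review itself, happens on the service’s side).

```python
import requests

# Hypothetical endpoint and field names -- illustrative assumptions only,
# not any specific vendor's real API.
MODERATION_ENDPOINT = "https://api.example-moderation.com/v1/images"
API_KEY = "your-api-key"

def submit_image(image_url: str, callback_url: str) -> dict:
    """Step 1: submit an image URL for review ("calling" the API)."""
    resp = requests.post(
        MODERATION_ENDPOINT,
        json={"imgurl": image_url, "callback": callback_url},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"request_id": "abc123", "status": "pending"}

def handle_callback(payload: dict, threshold: float = 80.0) -> str:
    """Steps 3-4: read the per-category likelihood scores (0-100%) returned
    to your callback URL and approve or reject the image accordingly."""
    scores = payload.get("scores", {})  # e.g. {"nudity": 2.1, "weapons": 91.4}
    if any(score >= threshold for score in scores.values()):
        return "reject"   # delete, quarantine or escalate the image
    return "approve"      # publish the image to your platform
```

The single threshold above is deliberately naive; in practice you would tune thresholds per category and per platform policy, and log the scores to spot repeat offenders.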

The Role of AI and Machine Learning in Professional Content Moderation

In contrast to crowdsourcing, professional content moderation services like WebPurify integrate artificial intelligence (AI) and machine learning to streamline and enhance the image moderation process. These technologies allow for rapid, efficient screening of vast amounts of user-generated content, quickly identifying and flagging potential issues.

While crowdsourced moderators might struggle to keep up with high volumes of content, AI algorithms can process information at incredible speeds, drastically reducing the time to detect and remove unsuitable content from a platform. This creates a safer, more secure online environment, thus significantly improving the user experience.

Additionally, unlike crowdsourced workers, machine learning algorithms improve over time. They continuously refine their performance, becoming increasingly adept at recognizing patterns of harmful content, thereby minimizing false positives and false negatives.

Human and AI Moderation: The Best of Both Worlds

Crowdsourced content moderation often suffers from a lack of accountability, inconsistency, and variable quality. In contrast, professional content moderation services like WebPurify leverage a balanced mix of AI and human intelligence, ensuring an effective and reliable moderation process.

At WebPurify, we scale our model to fit your business’ needs. Generally, we recommend a mix of both AI and human moderators. With this approach, AI algorithms serve as the first line of defense, handling high-volume image screening at incredible speeds. The AI outright rejects obvious violations while allowing images that are safe, or whose scores occupy a gray area, to pass through for additional human review. In this way, the overall workload for manual moderation is pared down. This initial layer of moderation is something crowdsourcing cannot effectively provide, given the immense throughput of content.

Following AI screening, our highly-trained human moderators review the content that remains. Most of it will be safe, but some images will have AI scores that warrant careful inspection. Our moderators take context into account and make more complex judgments – tasks that an AI alone can’t fully accomplish. This symbiosis of AI and human intelligence offers a level of nuance and precision beyond the reach of a crowdsourced approach.
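As a rough illustration of that division of labor, the sketch below (Python, reusing the hypothetical score format from the earlier example) rejects near-certain violations outright and routes everything else to the human queue, flagging gray-area scores for especially careful inspection. The thresholds are placeholders, not WebPurify’s actual routing logic.

```python
from typing import Literal

Decision = Literal["auto_reject", "human_review", "human_review_priority"]

# Placeholder thresholds -- real values depend on the model and your policies.
REJECT_ABOVE = 95.0  # near-certain violation: reject without human review
GRAY_ABOVE = 30.0    # ambiguous score: flag for especially careful inspection

def triage(scores: dict[str, float]) -> Decision:
    """First-pass AI screening: obvious violations are rejected outright;
    everything else continues to trained human moderators."""
    worst = max(scores.values(), default=0.0)
    if worst >= REJECT_ABOVE:
        return "auto_reject"
    if worst >= GRAY_ABOVE:
        return "human_review_priority"  # gray area: needs careful human judgment
    return "human_review"               # likely safe, but a human still confirms

# Example: {"nudity": 3.0, "weapons": 47.5} -> "human_review_priority"
```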

Why is Crowdsourcing Image Moderation So Dangerous?

With crowdsourcing, there are no experts involved in the moderation (step 2 above). Instead, everyday people are assigned the task of approving or rejecting the images uploaded by your users. These moderators are anonymous, untrained and unaccountable.

As such, a business should have very little confidence that these moderators:

  • will moderate according to the proper criteria
  • won’t steal the images or distribute them online
  • won’t allow false positives or negatives due to ignorance, laziness or malice
  • will maintain a secure, discreet work environment that upholds users’ privacy

Let’s take a look at how each of these failings can hurt your business and your users.

Moderation Criteria Can Vary Greatly

What is an acceptable image to your company? Is that perception the same for someone outside of your organization who’s completely unfamiliar with your company norms?

Because standards can vary greatly from one moderator to the next, and moderators can’t be trusted to know, or take into account, each client’s audience, there is simply no way to be confident in consistent enforcement. Here are a few examples – followed by a sketch of how such criteria might be encoded – though this list is by no means exhaustive:

Sexually-explicit material
For a highly-conservative faith-based organization, pictures of women in bikinis may not be acceptable for their website. However, an untrained moderator may not realize this and approve the image simply because it contains no nudity.

Profanity Filtering
Moderators, without proper training, may not recognize certain slang terms as offensive, and can overlook words that – though not profane – have recently taken on an offensive meaning due to current events and memes.

Competitor brands
If you’re relying on moderators to keep competitor brands off your website and to stop and remove brand bashing – in either direction – untrained moderators may not fully understand your competitive landscape and may miss certain company names.
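One way to see why this demands explicit, organization-specific rules rather than a stranger’s personal judgment is to imagine the criteria written down as per-client configuration. The sketch below is hypothetical – the category names, thresholds and term lists are invented for illustration:

```python
# Invented per-client criteria -- categories, thresholds and term lists are
# purely illustrative, not real client policies.
CLIENT_CRITERIA = {
    "faith_based_org": {
        "max_scores": {"nudity": 5.0, "swimwear": 10.0, "profanity": 0.0},
        "blocked_terms": [],
    },
    "swimwear_retailer": {
        "max_scores": {"nudity": 20.0, "swimwear": 100.0, "profanity": 5.0},
        "blocked_terms": ["rival-brand-x", "rival-brand-y"],  # competitor names
    },
}

def violates(client: str, scores: dict[str, float], detected_terms: list[str]) -> bool:
    """The same image can be acceptable for one client and a violation for another."""
    rules = CLIENT_CRITERIA[client]
    over_limit = any(
        scores.get(category, 0.0) > limit
        for category, limit in rules["max_scores"].items()
    )
    brand_hit = any(term in rules["blocked_terms"] for term in detected_terms)
    return over_limit or brand_hit
```

An untrained crowdsourced moderator has no such rulebook in hand – they apply whatever defaults feel right to them in the moment.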

Images are Vulnerable to Theft and Unauthorized Distribution

Because crowdsourced moderators are anonymous, they’re unaccountable. While most aren’t willfully malicious, there’s nothing stopping one from stealing your users’ images with few, if any, repercussions. Once images are taken, they’re liable to be shared with (or sold to) others and uploaded elsewhere.

Unfortunately, once your images are out in the open, there’s no way to stop their distribution, and control is definitively lost. This represents a major violation of your users’ privacy, and can lead to permanent and irreparable damage to their lives. What’s more, it can severely damage your company’s reputation and lead to legal consequences.

In the case of image moderation, an ounce of prevention is worth more than a few pounds of cure.

No real-time moderation

In the high-speed world of user-generated content, swift action is critical. The capacity to moderate content in real-time or near-real-time isn’t a luxury, it’s a necessity. Crowdsourced moderation, with its inherent delays and inconsistencies, leaves your platform vulnerable, allowing harmful content to linger and potentially disrupt your community.

Professional content moderation services like WebPurify understand the importance of time-sensitive moderation. Our robust team of human moderators and advanced AI systems work around the clock to identify and remove inappropriate content promptly. This allows us to safeguard your platform’s reputation, user experience, and overall safety, providing an unrivaled level of service that crowdsourcing simply can’t match.

The Benefits of Professional Image Moderation Services like WebPurify

The benefits of opting for a professional image moderation service like WebPurify far outweigh the unpredictable and risky nature of crowdsourcing. Our moderators are not only experts in understanding and maintaining your organization’s specific moderation criteria, but they also ensure that your platform remains safe from inappropriate content, thereby preserving your brand image.

Unlike crowdsourced moderators, who may be working in uncontrolled environments, WebPurify’s team operates under stringent security measures. This ensures your user-generated images are not susceptible to theft or unauthorized distribution.

With professional services, you also get a high level of assurance against false positives and negatives. Our oversight and accountability mechanisms are a stark contrast to the lack of control and consistency often seen in crowdsourced efforts.

As we adapt to technological advancements and continuously improve our services, WebPurify remains committed to providing the highest standard of content moderation, a commitment that simply can’t be matched by a crowdsourced model.

Data Privacy: A Major Advantage of Professional Moderation

In the age of data breaches and privacy concerns, entrusting user-generated content to anonymous, crowdsourced labor presents significant risks. Conversely, professional moderation services like WebPurify prioritize data privacy and protection, providing peace of mind for both you and your users.

WebPurify ensures strict compliance with major data protection regulations, including the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). Unlike in a crowdsourced model, where user data could potentially be exposed to hundreds or thousands of anonymous workers across numerous geographies, our uncompromising standards safeguard your users’ privacy and maintain your brand’s trustworthiness.

Quality Crowdsourced Image Moderation is Impossible to Maintain

When image moderators are untrained and unaccountable, there’s very little to prevent them from approving images that should never be OK’d, or rejecting images that are benign and acceptable.

There are numerous reasons this can happen. Here are a few examples:

Ignorance
As we mentioned earlier, the untrained moderator may not have a firm grasp of your moderation criteria.

Laziness
They could also simply be approving or rejecting images out of laziness, to quickly clear the image moderation tasks from their work queue.

Malice
If the person tasked with moderation objects to an organization’s religion or demographics, or simply dislikes the company, they may intentionally moderate the images incorrectly out of spite or protest. Even with robust screening and meaningful interview processes – something crowdsourcing from afar doesn’t offer – it’s difficult to detect whether an individual harbors such convictions.

Missing the bad, Flagging the good
In conferring with our clients about their previous crowdsourcing experiences, many of them report that false negatives and false positives happen with a frustrating and regrettable frequency.

So How Should You Moderate Images? Trained & Supervised Image Moderators Are Key.

If it’s important to you and your company that your users’ images aren’t stolen, that strict moderation criteria are maintained, and that false positives and negatives are virtually eliminated, crowdsourced image moderation is a dead end.

Short of hiring, training, scaling and equipping a team from scratch, your one reliable alternative is a professional image moderation service staffed by highly-trained, adequately supported and strictly-overseen moderators.

Distinct from crowdsourcing, a professional image moderation service can ensure with high levels of confidence that:

  • Your organization’s custom moderation criteria are strictly adhered to, and updates to those criteria are rolled out seamlessly and quickly.
  • Moderators cannot steal or distribute your users’ images since they’re using company computers with restricted and monitored internet access plus anti-screengrab software and a “clean desk” (no phones) policy.
  • Moderators are dedicated to your project and working for a known entity. In other words, you know where your moderation vendor’s people are located, can rest assured they’re not moonlighting elsewhere and can take comfort in the fact their attention isn’t unduly divided.
  • The moderators whose services you’re relying on are themselves being treated well. Content moderation is a crucial but taxing job. It’s rewarding for everyone involved when done right, but unscrupulous providers – especially crowdsourced shops – can be exploitative. Any professional moderation service of good repute will document and disclose the resources they make available to their employees, such as access to mental health professionals, rest periods and comfortable work environments.
  • False positives and negatives are virtually eliminated, since the moderators are better trained, better incentivized, better managed and chose this job as a full-time career, not a part-time gig.

To protect the privacy of your users, to ensure children aren’t exposed to harmful material, and to safeguard your company’s brand and reputation, we strongly recommend that you avoid crowdsourcing for image moderation. The risks far outweigh any perceived upside. Instead, we suggest you evaluate a professional image moderation service like WebPurify.