
The Top 5 Things Missed When AIs Moderate User Generated Content

January 13, 2019 | Image Moderation, UGC

The current argument for AI-based moderation of user-generated content (UGC) seems to be driven partly by the big social media platforms as they try to keep up with the subversion of their terms and conditions and with harmful activities such as terrorism, racist agendas, and the spreading of misinformation and “fake news.” But at this point it’s crucial to step back and find some perspective.

Facebook and every other large platform moderate massive volumes of user-generated content, made up of multiple forms of media – video, pictures, memes, and text in a multitude of different languages. It’s the sheer volume of this task that has forced the hand of companies like Facebook to employ AI, and it’s by no means an off-the-shelf or perfect solution. Used alone, it is often inadequate for moderating certain forms of user-generated content.

Government Pressures to Censor

Some of the more recent changes on the largest social media platforms are driven by government attempts to control information distributed by terrorist organizations. Such government moves could be seen as shutting the stable door after the horse has bolted.

In many cases, problems with social unrest are just that – social problems caused by government policies, wars, and exploitation in world politics. The fact remains, however, that the spread of unrest has spilled onto social media. The viral nature of shocking images and information is a problem for those tasked with maintaining law and order.

To a degree, these changes and the implementation of AI-based content moderation can easily be the actions of disparate platforms scrambling to keep up with government demands and threatened sanctions. The big platforms may have no choice but to compromise fairness, inclusivity, and accuracy under the flood of almost insurmountable volumes of user-generated content. For brands that host platforms for discussion, customer input, and feedback, however, the potential pitfalls are the alienation of customers, potential clients, and even entire ethnic and religious groups.

Avoid the temptation to employ AI-based content moderation where volume doesn’t demand it. Facebook has taught us that AI isn’t ready for that task, and that it may indeed never be able to deal with the complex challenges involved in interpreting the way in which humans interact, debate, and pass comment.

One of the most telling indicators, when viewing Facebook’s recent efforts to employ AI from afar, is its subsequent need to recruit thousands of human moderators. If we ask ourselves, “Who’s watching the watchers?” – the answer is humans. And there are excellent reasons for that.

So, here are the top 5 things missed when AI alone moderates UGC, without any human oversight.

#1 – Written/Spoken Language and AI Content Moderation

Machine learning is only as effective as the training it receives for a specific task. When it comes to moderating interactions between humans on a global scale, such as on the internet, the rules need to be extraordinarily complicated to work.

Add to that the challenge of the many forms and dialects within the English language alone – and the fact that it takes humans a lifetime to learn even their local variety – and the scale of the machine’s task becomes clear. Developing a tool that can operate across every form of communication, every language, and every dialect is the stuff of fantasy.

Natural Language Processing (NLP) tools are trained by humans to understand the type of text they’re examining. However, this understanding is limited to simple rule-making that has a hard time accounting for the complexities of human communications.
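
To illustrate why simple rule-making falls short, here is a minimal sketch in Python – the blocklist and example posts are entirely hypothetical – of the kind of keyword rule such a filter might apply. It flags a benign slang compliment while waving through a hostile post that contains no blocklisted word.

```python
# A minimal sketch of keyword-rule moderation. The blocklist and the example
# posts are hypothetical, purely to show the limits of simple rules.

BLOCKLIST = {"sick", "trash", "idiot"}

def rule_based_flag(post: str) -> bool:
    """Flag a post if any blocklisted word appears in it."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & BLOCKLIST)

posts = [
    "That gig last night was sick!",                     # benign slang: wrongly flagged
    "People like you should not be allowed to speak.",   # hostile, no keyword: missed
]

for post in posts:
    print(f"flagged={rule_based_flag(post)}  |  {post}")
```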

Even within a single country, there are varying accents and dialects, each of them unique. Often you need to have been born and raised in a specific place to have any hope of understanding everything said to you or discussed around you. Phrases and their intended meanings can change in the space of a few streets and from neighborhood to neighborhood – as can accents. The whole thing can be a minefield of misunderstandings.

The present reality of AI-driven content moderation and NLP is that imparting any understanding of these subtle, multiple linguistic differences to a machine would require almost inconceivably large amounts of input and training data. And you’d have to find humans who understand it all in the first place.

Broad NLP tools that operate across all areas of the web and monitor every language and form of communication are not feasible. For that, AI would have to be far more advanced, and the training of each tool would have to be far more localized and specialized – and correspondingly expensive.

We all know that the possibilities for misunderstanding between people from vastly different cultures are massive in the real world. Online, that potential for offense increases exponentially: communication is instant, and discussions – especially on public forums – are conducted between thousands of users at once.

These participants might all be from different geographical locations, with different belief systems and contrasting moral or civil codes. It’s a powder keg that requires very subtle moderation, and great understanding and attention when things get heated and passions run high.

Moderating human communication is difficult, and it requires a set of skills – but crucially, it also demands real insight and carefully considered judgment. Get it wrong, and one party will nearly always see the result as bias or, even worse, as unfair censorship.

There’s also the potential that many could mistake bad moderation for cultural bigotry. The implications of that are very damaging to a company or organization providing such a platform.

Forums and social media sites can be great, productive cauldrons of ideas and promote better cultural understanding. They have the potential to be tools for building a better world and improving the way in which we all understand each other.

But that great, ongoing potential brings with it the need for fair and accurate moderation. And real progress in the way we understand other cultures and share ideas can only happen if most users are there with the right intentions.

Likewise, no one wants a hand of moderation so heavy that it acts on too broad a set of rules and parameters and excludes those with genuine intentions of good discussion and ideas.

#2 – Sarcasm

It’s an everyday part of the way humans interact, and it’s usually apparent to us when somebody uses sarcasm to make a point. For machines, though, the process of identifying and interpreting the intent of irony is a very complicated business indeed.

AI content moderation relies on masses of data input when applying NLP to the task of moderating UGC. The rules laid down for the process, and the data provided by human programmers, are finite and often hard to source.

Let’s look at a statement such as the following:

“The universal TV remote I just bought from Acme Remotes is amazing! It’s so universal that when I attempt to change the channels on my TV, it shuts the blinds.”

Now, you and I might be able to identify obvious sarcasm in that sentence instantly, but what would an AI employing NLP to monitor UGC in a comments section make of it?

NLP works by breaking down the components of a sentence and then analyzing the meanings of the words within it. Sarcasm is an area of human expression that, by definition, doesn’t conform to literal meaning. It creates a micro-context that is foreign to the way we communicate conventionally, yet remains a handy form of communication. It’s usually missed or misunderstood by AI.
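
As a rough illustration, here is a minimal sketch of word-level sentiment scoring – the tiny lexicon is invented for the example – applied to that remote-control complaint. Scored literally, word by word, the sarcasm is invisible and the post reads as glowing praise.

```python
# A minimal sketch of lexicon-based sentiment scoring. The tiny lexicon is
# invented; the point is that literal word-by-word analysis cannot see sarcasm.

SENTIMENT_LEXICON = {"amazing": 2, "universal": 1, "broken": -2, "useless": -2}

def word_level_sentiment(text: str) -> int:
    """Sum per-word sentiment scores, ignoring context, tone, and irony."""
    cleaned = text.lower().replace("!", " ").replace(".", " ").replace(",", " ")
    return sum(SENTIMENT_LEXICON.get(word, 0) for word in cleaned.split())

review = ("The universal TV remote I just bought from Acme Remotes is amazing! "
          "It's so universal that when I attempt to change the channels on my TV, "
          "it shuts the blinds.")

print(word_level_sentiment(review))  # prints 4: a strongly 'positive' complaint
```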

#3 – Pictures and Memes

Pictures and memes are a problematic area for AI. Human user-generated content moderators can look at an image – or an image that incorporates some form of text – and instantly comprehend all of its components.

Memes are a slightly more context-reliant and subtle form of visual communication, often involving a generous portion of satire and sarcasm to make a point or a joke. While memes can be powerful tools for both good and subversive intent, they often take even the most seasoned human moderator some study time to work out and to assess for suitability in an online environment.

Let’s consider the example of a picture of a protest in which a protester holds up a sign. Such signs can often be ambiguous when taken out of a known context. An image like this may appear in a news article about, let’s say, a protest against anti-Semitism, and a human reader or moderator will instantly recognize that an offensive phrase or word on the placard is being quoted to condemn it – that the sign is, in fact, a declaration of support for Jewish people.

An AI, though, relies on a combination of programming and a series of techniques that first break the image down into its text and picture elements and then assess each aspect separately. Without reference to context, it might completely miss the point of the message on the sign.
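
Here is a minimal sketch of that kind of context-blind pipeline. Every function is a hypothetical stub standing in for a real OCR model or classifier; the point is that the text and the picture are each assessed in isolation, so nothing ever sees the context that makes the sign a protest against hate rather than an endorsement of it.

```python
# A minimal sketch of a context-blind image moderation pipeline. Every function
# here is a hypothetical stub standing in for a real OCR model or classifier.

def extract_text(image_path: str) -> str:
    """Stub OCR step: pretend we read the protest placard out of the photo."""
    return "NO MORE <SLUR> HATE"   # the sign quotes the slur in order to condemn it

def classify_text(text: str) -> str:
    """Stub text classifier: rejects anything containing the slur, context-free."""
    return "reject" if "<SLUR>" in text else "approve"

def classify_image(image_path: str) -> str:
    """Stub scene classifier: sees only 'crowd outdoors, signs', nothing alarming."""
    return "approve"

def moderate(image_path: str) -> str:
    # Text and picture elements are assessed separately; no step ever asks *why*
    # the slur appears, so the anti-hate protest photo gets rejected.
    verdicts = {classify_text(extract_text(image_path)), classify_image(image_path)}
    return "reject" if "reject" in verdicts else "approve"

print(moderate("protest_photo.jpg"))   # -> reject
```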

#4 – Lies and Fake News

Fake news. It’s a real modern-day preoccupation. With the rise of social media platforms and a multitude of online news outlets, misinformation has arguably become more prevalent.

Propaganda isn’t a recent invention. Rather than an absolute increase in instances of misinformation, what we’re probably observing is an increase that’s merely proportional to the volume of news and information the internet has made available to us.

It’s one of the clearest signs of an increasing need for active moderation, though. And all indications so far suggest it’s a battle that content moderation by AI alone isn’t capable of winning.

Beyond overly reactive forms of machine-based censorship – blanket bans on news outlets and monitoring of geographical areas known to produce fake news – the task of AI-based moderation of such news content is a very difficult one.

There have been schemes set up whereby AI filters content and passes suspect articles to journalists for review, but this amounts to little more than a glorified (and expensive) filter. It’s the humans we ultimately rely on to discern what’s genuine and what’s bogus.
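
In practice, such a scheme looks something like the routing sketch below – the suspicion scores and threshold are invented for illustration. The model only decides what lands in a journalist’s review queue; a human still makes the final call.

```python
# A minimal sketch of AI-as-filter for suspect news articles. The scores and
# threshold are hypothetical; a real system would get them from a trained model.

REVIEW_THRESHOLD = 0.6   # anything the model finds this suspicious goes to a human

articles = [
    {"headline": "Local council approves new park", "suspicion": 0.05},
    {"headline": "Miracle cure suppressed by doctors", "suspicion": 0.92},
    {"headline": "Candidate secretly born abroad, sources say", "suspicion": 0.71},
]

review_queue = [a for a in articles if a["suspicion"] >= REVIEW_THRESHOLD]
auto_published = [a for a in articles if a["suspicion"] < REVIEW_THRESHOLD]

print("Sent to journalists for review:")
for a in review_queue:
    print(" -", a["headline"])

print("Published without review:")
for a in auto_published:
    print(" -", a["headline"])
```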

#5 – Bias

Depending on how you think about technology and computers, this is a weakness in AI moderation that may surprise you. It’s less surprising when you remember that AIs rely on data input from humans. And as when we ‘program’ our offspring, bias gets passed on to one degree or another.

Now, we all know that AI developers are not purposefully inserting their bias into the systems that they create. But we can all agree that it’s unavoidable that some of their preconceived ideas and ideologies get passed to the models.

AI teaches itself based on the data fed to it. There’s reason to believe that minority groups fare worse under a system of NLP moderation: even given a broad and representative range of input data, there is proportionately less data about minority groups, and therefore less potential for the AI to develop an understanding of them – which can itself result in a form of bias.
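
A toy example can make that mechanism concrete. The sketch below uses synthetic data and hypothetical group labels: the minority group is under-represented in training, and its benign posts happen to resemble the majority’s offensive ones in feature space, so the learned filter flags them far more often.

```python
# A minimal sketch (synthetic data, hypothetical group labels) of how
# under-representation in training data can skew moderation against a group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate toy 'posts' as 2-D feature vectors; 10% are offensive."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (rng.random(n) < 0.10).astype(int)
    X[y == 1] += 2.0          # offensive posts sit higher in feature space
    return X, y

# 5,000 majority-group posts, only 100 minority-group posts. The minority
# group's benign posts statistically resemble the majority's offensive ones --
# think dialect or reclaimed terms that a context-blind model can't separate.
X_major, y_major = make_group(5000, shift=0.0)
X_minor, y_minor = make_group(100, shift=2.0)

clf = LogisticRegression().fit(
    np.vstack([X_major, X_minor]),
    np.concatenate([y_major, y_minor]),
)

# Compare how often each group's *benign* posts get wrongly flagged.
for name, X, y in [("majority", X_major, y_major), ("minority", X_minor, y_minor)]:
    false_positive_rate = clf.predict(X[y == 0]).mean()
    print(f"{name}: benign posts wrongly flagged = {false_positive_rate:.1%}")
```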

The success rates we can currently expect from NLP-based AI content moderation are so weak that, to reach a standard acceptable on social media, filters need to be set so aggressively that they effectively apply bias. And even in the relative absence of actual bias, results may still be perceived as biased to a degree that alienates minority groups. The technology isn’t advanced enough to be accurate while maintaining universal fairness.

AI Content Moderation? Caution is Required

There’s no doubt that business owners and those responsible for upholding order on websites, forums, comment sections, and social media need to be careful when choosing how they’ll moderate user-generated content.

There’s a temptation to focus on the sheer volume of work that AI and NLP can get through, and how quickly they can do it. That attraction is understandable where massive volume exists, but there are excellent reasons to weigh the undeniable inaccuracies of such systems and the very real damage those inaccuracies can cause.

Thankfully, there are image moderation services that can support those responsible for upholding a brand’s image, helping them plan their campaigns and moderation strategy through a smart combination of advanced AI and humans working in tandem.

Unfair moderation, irrespective of the reasons for that unfairness, can quickly alienate customers and users to the point of causing real problems. Given these failings, and depending on a brand’s use case, the use of AI alone to moderate user-generated content should be limited.