
YouTube’s Video Moderation Strategy Faces Challenges

January 22, 2018 | Video Moderation, UGC

There’s no doubt about it: a strong video moderation strategy is good for business. When you think of video moderation for your website or app, you may think first and foremost of shielding customers and bystanders from objectionable content, and that is a completely valid and necessary function of content moderation. What you may not immediately think about is how it also protects your bottom line, especially if you run a site that allows advertisers.

Here are a few examples of how social media behemoth YouTube’s struggles with its video moderation strategy have cost it real dollars and cents:

Algorithms Don’t Always Get It Right

Machines are smart, but they are not the be-all and end-all of video moderation. We use machine learning ourselves and know that AI is necessary now and will only grow stronger; even so, algorithms can backfire, and humans still play an essential role in any video moderation strategy. Take YouTube’s most recent faux pas. Google, YouTube’s parent company, posted an ad for its new Chromebook Pixel on the platform, and YouTube’s own algorithm flagged it as spam. As The Verge points out, “It’s great that Google is building tools to automatically flag and remove deceptive videos and weed out spam, but if the end result is a black box that just arbitrarily makes decisions that even Google’s content isn’t safe from, then who is it really helping?”

An Inefficient Review Process

A similar video moderation flub recently hit some of YouTube’s top “creators,” who were posting videos unveiling the new iPhone X. Ads are a huge revenue source for many YouTubers, and for some reason these iPhone X videos were flagged by YouTube’s algorithm as “unsuitable for advertisers.” That demonetized the videos, cutting into both the creators’ income and YouTube’s own, and it angered creators (also bad for business). YouTube’s fix? If a creator appeals the automatic moderation call, the video automatically receives a human review. Unfortunately for those content creators, time lost during the appeal process is money lost. Why not have a human review a video before demonetizing it, rather than after? The sketch below contrasts the two workflows.

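To make that difference concrete, here is a minimal sketch of the two workflows in Python. Everything in it is hypothetical: the Video class, the ad_safety_score field, the threshold and the queues are our own illustrative inventions, not a description of YouTube’s actual systems.

from dataclasses import dataclass

@dataclass
class Video:  # hypothetical model of an uploaded video
    video_id: str
    ad_safety_score: float  # assumed classifier output: 0.0 (unsafe) to 1.0 (safe)
    monetized: bool = True

UNSAFE_THRESHOLD = 0.4  # hypothetical cutoff below which the algorithm flags a video

def demonetize_first(video, appeal_queue):
    # The flow described above: the algorithm acts immediately; humans only see appeals.
    if video.ad_safety_score < UNSAFE_THRESHOLD:
        video.monetized = False      # revenue stops right away
        appeal_queue.append(video)   # a human looks only if the creator appeals

def review_first(video, review_queue):
    # The alternative: a human confirms the flag before any revenue is cut off.
    if video.ad_safety_score < UNSAFE_THRESHOLD:
        review_queue.append(video)   # monetization stays on until a moderator rules

appeals, reviews = [], []
clip_a = Video("iphone-x-unboxing", ad_safety_score=0.35)
clip_b = Video("iphone-x-unboxing", ad_safety_score=0.35)
demonetize_first(clip_a, appeals)   # clip_a.monetized is now False, before any human review
review_first(clip_b, reviews)       # clip_b.monetized is still True, pending human review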

Losing Advertisers

Like many large social media platforms, YouTube faces the dilemma of removing offensive content versus allowing it in the name of free speech. The challenge here, though, is perhaps less about moderating those videos, even ones the average person would deem completely inappropriate, and more about YouTube getting its act together where advertisers are concerned. Why pair any advertiser with this kind of crude content? Recently, major brands such as Walmart, Verizon and Pepsi pulled their ads after finding they were appearing next to videos that included hate speech and extremist views. “They need to get better at the management of what is brand-safe and what isn’t,” says Gabe Winslow of the digital marketing agency Ansira in this Guardian article. While YouTube does offer advertisers the chance to pick, based on keywords, what types of videos they are OK with being associated with (a mechanism sketched below), the platform seems not to have delivered on that promise. It could have something to do with the sheer volume of video it is dealing with: around 400 hours are uploaded each minute. As big brands continue their boycott, it will be interesting to see how YouTube and Google revise their advertising and video moderation strategies to protect not only their reputations, but their revenue streams.

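As for those keyword controls, the underlying idea can be sketched in a few lines. This is a toy illustration assuming a simple blocklist matched against a video’s title and description; it is our example, not YouTube’s actual mechanism, which operates at vastly greater scale.

from typing import Iterable

def is_brand_safe(video_text: str, excluded_keywords: Iterable[str]) -> bool:
    # Return False if any advertiser-excluded keyword appears in the title or description.
    text = video_text.lower()
    return not any(keyword.lower() in text for keyword in excluded_keywords)

blocklist = ["extremist", "hate speech", "graphic violence"]  # hypothetical advertiser blocklist
print(is_brand_safe("Daily vlog: coffee and croissants", blocklist))           # True
print(is_brand_safe("Rally footage featuring extremist speakers", blocklist))  # False

Even this trivial filter hints at the scale problem: at roughly 400 hours of video uploaded every minute, any such check must run constantly, and misses are inevitable without human review as a backstop.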
What Can We Learn From This?

First, let us point out that we at WebPurify are always learning, paying close attention to the failures and successes of moderation programs in order to improve our own approach. We would never suggest that we could magically swoop in and solve all of YouTube’s challenges. What we can do, with over 10 years in the content moderation space, is help clients identify the risks they will face when allowing user-generated video content and shape a video moderation plan that fits their commercial goals. Opening the floodgates to user-generated content without a scalable plan is risky and irresponsible. In short, we can help you avoid YouTube-style scenarios (advertisers pulling budgets, your own company’s campaigns getting flagged) by anticipating the blind spots that may lie ahead of you.

Find out more about our moderation service and video moderation service. Plus, see how Facebook is Tackling Moderating Violent Videos here.