There’s No Rewind
April 19, 2019 | Video Moderation, UGC

As reported by the Washington Post, a parent watching cartoons with her son on YouTube Kids found a disturbing clip spliced almost five minutes into the otherwise innocuous video. A man walked onto the screen and mimicked cutting his own wrists, saying “Remember, kids, sideways for attention, longways for results,” before walking back off screen. The video then immediately switched back to the cartoon.
YouTube removed the video after it was reported, but more users have since discovered other cartoons with the same spliced-in clip, in addition to other content giving children explicit instructions on how to commit suicide.
That same month, Matt Watson, a video blogger, posted a video explaining how pedophiles are identifying and sharing timestamps for certain YouTube videos in which children are participating in activities such as playing Twister or doing gymnastics.
He also showed that, if users clicked on one such video, YouTube’s algorithms recommended similar ones. According to Wired, the algorithms don’t just recommend other videos of children playing; they specifically suggest videos popular with other pedophiles.
Many of those videos were also being monetized with advertisements. Companies including Disney and Nestle quickly pulled the plug on their ad spend with YouTube.
It’s not that YouTube creates this awful content; rather, its platform has made it easier than ever for the most amazing and the most abhorrent among us to express ourselves, publicly and with little restriction.
As Ben Thompson wrote a little over a year ago,
“…focusing on the upsides without acknowledging the downsides is to misevaluate risk and court disaster. And, for those inclined to see the negatives of the Internet, focusing on the downsides without acknowledging the upsides is to misevaluate reward and endanger massive future opportunities. We have to find a middle way, and neither side can do that without acknowledging and internalizing the inevitable truth of the other.”
The amount of content YouTube needs to police is extraordinary. The company built a platform that scaled into a behemoth without effective moderation strategies in place to grow alongside it. And the result is not simply lost advertiser dollars or companies’ disgust at being associated with these videos: it’s deeply problematic and horrific content that victimizes children and innocent users.
A reactive moderation approach, one that depends on algorithms flagging content and on user reports, is unrealistic and irresponsible.
We’ve been approached in the past by potential clients who want us to watch only the first few minutes of a video submission, or to spot-check every 20-30 seconds, and assume the rest is safe. But the savings in dollars simply aren’t worth the risk (or the possible reputation damage). For live streaming, the riskiest type of UGC, we begin monitoring within 5 seconds of the content going live, and our AI solutions check one frame per second. It’s difficult to get right, and it takes a lot of training. It’s a hard truth, but there’s no effective in-between.
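To make the frame-sampling idea concrete, here is a minimal sketch of what checking one frame per second of a live stream might look like. It assumes OpenCV (cv2) for capture; the monitor_stream and classify_frame names are hypothetical placeholders for illustration, not our production pipeline.

```python
# A minimal sketch of one-frame-per-second sampling for live-stream moderation.
# Assumes OpenCV (cv2) for frame capture; classify_frame is a hypothetical
# stand-in for a real moderation model or API call.
import time

import cv2  # pip install opencv-python


def classify_frame(frame) -> bool:
    """Hypothetical placeholder: return True when a frame should be escalated.

    A real implementation would call a trained moderation model or service.
    """
    return False


def monitor_stream(stream_url: str) -> None:
    capture = cv2.VideoCapture(stream_url)
    last_sample = 0.0
    while capture.isOpened():
        ok, frame = capture.read()  # decode frames as they arrive
        if not ok:
            break
        now = time.monotonic()
        if now - last_sample >= 1.0:  # classify roughly one frame per second
            last_sample = now
            if classify_frame(frame):
                print("Frame flagged; routing to a human moderator")
    capture.release()
```

In practice, flagged frames would be queued for human review rather than printed, and the sampling rate would be tuned to the platform’s risk profile.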
The takeaway: allowing user-generated videos on your platform involves risk. Risk to innocent subjects, to users, to brands, to consumers, and to advertisers. That risk needs to be managed as well as possible with UGC moderation, using a combination of AI and human review, along the lines of the routing sketch below.
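As a rough illustration of how AI and human review fit together, here is a minimal routing sketch. The route_content function and its thresholds are hypothetical; real cutoffs depend on a platform’s content mix and risk tolerance.

```python
# A minimal sketch of AI-plus-human routing based on a model confidence score.
# The thresholds below are illustrative assumptions, not recommended values.
def route_content(model_score: float) -> str:
    """Route an item based on a moderation model's score in [0, 1]."""
    if model_score >= 0.95:   # near-certain violations are removed automatically
        return "auto_remove"
    if model_score >= 0.40:   # ambiguous content goes to a human moderator
        return "human_review"
    return "publish"          # low-risk content goes live
```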
What we’ve learned over 10 years in content moderation is how to help clients identify the risks they will face when allowing user-generated video content and to build a moderation plan appropriate for their needs. Before content volume becomes huge and unmanageable (or before Apple pulls your app from the App Store for not having a moderation plan in place), think of us as your consultants, and reach out. Because the genie can’t go back into the bottle, and there’s no rewind.