
How Facebook is Tackling Moderating Violent Videos

December 3, 2017 | Video Moderation, UGC

Since launching live video streaming early last year, Facebook has faced the unprecedented challenge of escalating violence featured in its videos. Alongside carefree videos of friends at football games and parents recording the funny things their little kids say are, unfortunately, broadcasts of shootings and suicides. Monitoring this upswing in sensitive videos quickly has been a challenge for Facebook, and the social media giant has at times failed to remove violent content, or to attach warnings to sensitive content, before hundreds, thousands, and in some cases hundreds of thousands of viewers had seen it.

Facebook CEO Mark Zuckerberg has publicly acknowledged this moderation issue, and we wrote about it earlier this year in our post The Risk of Live Video on Your Website. So, what is Facebook’s plan for moderating violent videos? Find out more below.


Why Can’t Facebook Get It Together?

Considering that Facebook’s innovations team works on everything from augmented-reality contact lenses to computer interfaces that may someday scan your brain and translate your thoughts into text, it is surprising that the company doesn’t already have an efficient system for moderating violent videos. This is partly due to its massive volume of user-generated content (UGC), which runs into the billions of items per day. But it also seems to be due to a lack of planning where moderation is concerned. As we often find when working with WebPurify clients, there are surprises around every user-generated corner, which is why we work very hard to anticipate what those may be.


Facebook’s Moderating Pledge

In response to escalating violence, as well as hate speech and child exploitation, featured in its videos and posts, Facebook is hiring an additional 3,000 content moderators, bringing its global operations team to around 7,500. These new hires will focus on reviewing the millions of reports the company receives each week. According to this Forbes article, “To improve the moderation process, Zuckerberg said Facebook is building tools to make it easier for users to report video and other content formats. The company is also working to find ways to shorten the time it takes moderators to determine if content violates Facebook’s policies, and make it easier for reviewers to contact law enforcement if someone is in danger. Zuckerberg said Facebook will work with local community groups and law enforcement to support this effort.”


Man Over Machine

In this day and age, one might think that artificial intelligence (AI) could handle the moderation of violent videos, but, as we know at WebPurify, human moderators are still a key and necessary part of the equation. As of now, AI can’t be fully trusted to distinguish genuinely dangerous violence in UGC from milder, socially accepted violence, such as a Muay Thai boxing match or a staged fight scene in a play.

AI definitely aids in sifting through millions of videos and speeding up the moderation process, helping to flag inappropriate content and escalate it to a human moderator. For example, one position on Facebook’s global team is Escalations Specialist, a role that, according to Fast Company, entails investigating “reported escalations.” From our own experience, we can assume that this person watches a flagged video closely for unacceptable content and then either approves it or removes it based on Facebook’s policies. A simplified version of that flow might look like the sketch below.
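To make that flow concrete, here is a minimal, purely illustrative Python sketch of an AI-first, human-escalation pipeline. Every name and threshold in it (such as `ai_violence_score` and the cutoff values) is our own assumption for the sake of the example; it is not how Facebook’s, or our, internal tooling actually works.

```python
# Illustrative sketch of an AI-first, human-escalation moderation flow.
# All names and thresholds are hypothetical assumptions for this post.

from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    APPROVED = "approved"
    REMOVED = "removed"
    ESCALATED = "escalated"  # sent to a human moderator


@dataclass
class VideoReport:
    video_id: str
    ai_violence_score: float  # 0.0 (benign) to 1.0 (clearly violent)


def triage(report: VideoReport,
           approve_below: float = 0.2,
           remove_above: float = 0.9) -> Verdict:
    """Auto-handle the clear cases; escalate the ambiguous middle.

    The ambiguous band (a Muay Thai match vs. a real assault, say)
    is exactly where AI is unreliable, so a human makes the call.
    """
    if report.ai_violence_score < approve_below:
        return Verdict.APPROVED
    if report.ai_violence_score > remove_above:
        return Verdict.REMOVED
    return Verdict.ESCALATED


# Example: a mid-range score is routed to a human reviewer.
print(triage(VideoReport("abc123", ai_violence_score=0.55)))  # Verdict.ESCALATED
```

The design point is the middle band: content the model is confident about gets handled automatically, while the ambiguous cases, where AI stumbles, are routed to a person.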


Facebook Is Not Alone

While Facebook may seem to have had more issues in this regard than several of its social media counterparts, moderating violent videos and other dangerous and inappropriate content is something every platform that allows video has to deal with. Instagram (owned by Facebook) has implemented hashtag blocking to prevent groups from sharing potentially harmful content, such as content that promotes eating disorders, and YouTube and Reddit are ramping up their community-based moderation systems. But is that enough? Likely not.


WebPurify’s Video Moderation Approach

With many years of experience, WebPurify employs a thorough system to make sure that inappropriate content isn’t seen on our clients’ sites. We fully believe in a collaborative approach to moderating videos: enabling users to flag offensive videos, using algorithms and technology to tag and filter out abusive content, and relying on a highly trained, extensive human team to review videos and make final judgment calls. The sketch below illustrates how those layers might fit together.
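As a hypothetical illustration of that layered approach, the short Python sketch below blends user flags with an algorithmic score to decide which videos a human moderator should review first. The field names and weights are invented for this example and do not describe WebPurify’s actual implementation.

```python
# Hypothetical sketch of a layered moderation queue: community flags
# plus an algorithmic score decide human review priority.
# Field names and weights are illustrative assumptions only.

import heapq
from dataclasses import dataclass, field


@dataclass(order=True)
class QueuedVideo:
    priority: float            # lower sorts first in a min-heap
    video_id: str = field(compare=False)


def priority_score(user_flags: int, ai_score: float) -> float:
    """Blend community flags with the model's confidence.

    More flags and a higher AI score both push the video toward
    the front of the human review queue (smaller value = sooner).
    """
    return -(min(user_flags, 10) / 10.0 * 0.5 + ai_score * 0.5)


queue: list[QueuedVideo] = []
heapq.heappush(queue, QueuedVideo(priority_score(8, 0.7), "vid_1"))
heapq.heappush(queue, QueuedVideo(priority_score(1, 0.2), "vid_2"))

# Human moderators pull the most urgent video first.
print(heapq.heappop(queue).video_id)  # vid_1
```

Weighting community flags alongside the model’s score means a video that either signal considers urgent rises to the top of the human queue, so reviewers spend their time where it matters most.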


Find out more about our video moderation service here. Additionally, here are some of our tips on minimizing the risk of live broadcasting.