
Facebook’s Content Moderation Policy: Our Summary and Opinion

July 9, 2018 | Image Moderation, Video Moderation, UGC

For the first time, Facebook is making its internal content moderation guidelines public. The 27-page document details how the company defines hate speech, violence, nudity, terrorism, and other banned content.

At WebPurify, we’re quite familiar with how complicated the task of content moderation can be, and how a seemingly simple matter of policy can become incredibly nuanced. So, we’re sharing a summary of some key points from their newly released guidelines, followed by a few thoughts of our own.

Facebook breaks their content policy into six categories, each containing several subcategories:

1. Violence and Criminal Behavior

This section covers credible threats of violence (posts naming a specific target along with other details such as location, timing, or weapons), instructions for making weapons or explosives (when the intent is violent), hate groups, organized crime, and regulated goods. It explains how Facebook defines terrorist organizations and hate groups, and bans any post supporting or encouraging those groups.

Note the specific exceptions Facebook needed to carve out under the section on physical harm to animals: hunting, fishing, religious sacrifice, and food preparation or processing.

2. Safety

Self-harm, suicide, sexual exploitation, bullying and harassment, and privacy violations all fall under the Safety category.

Under Bullying, they’ve specified that pages or groups are banned if they attack an individual while appearing to be written in the first person, but are actually posted by someone other than the person referenced.

Still with us? These rules are starting to get complicated.

3. Objectionable Content

This section covers hate speech, graphic violence, nudity and sexual activity, and “cruel and insensitive” content.

Their section on hate speech bans attacks on protected classes and extends some of those protections based on immigration status. Facebook further splits attacks into three tiers of severity, and provides exemptions for people who use otherwise banned words in a self-referential way or as a form of empowerment.

We’ll let you read the section on graphic violence yourself if you wish, but know that somewhere along the way, it became necessary for Facebook to specifically address cannibalism.

4. Integrity and Authenticity

Spam, Misrepresentation, Memorialization, and “False News” are covered in this section.

Here, Facebook needs to look beyond individual posts to the accounts themselves, flagging accounts with fake names or ages, or accounts that exist to spread misleading information.

False News is the only section we found where Facebook reduces the distribution of a flagged post rather than removing it. Opinion pieces, satire, and political differences have simply proven too challenging to police with outright removal.

5. Respecting Intellectual Property

In cases of copyright or trademark infringement, Facebook removes content once the rights holder reports it.

6. Content-Related Requests

This section covers user requests for content removal and additional protections for minors. They specifically allow government requests to remove content containing child abuse imagery, as well as parental requests to remove content involving minors who unintentionally went viral.

As a global publisher of user-generated content, Facebook has its work cut out for it where moderation is concerned. They use a combination of artificial intelligence and user reports to identify content that may violate their policy; flagged content is then reviewed by one of 7,500 content reviewers. They’ve rolled out a more comprehensive appeals process for certain violation categories, with additional expansions promised in the coming year.
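To make that workflow a little more concrete, here is a minimal sketch (ours, not Facebook’s) of how a hybrid pipeline like this might route a post: an automated classifier score combined with user reports decides whether content is removed outright, queued for a human reviewer, or left alone. Every name, threshold, and the toy classifier below are hypothetical, purely for illustration.

```python
# Hypothetical hybrid moderation routing: model score + user reports -> action.
# Thresholds and the stand-in classifier are illustrative assumptions only.
from dataclasses import dataclass

AUTO_REMOVE_THRESHOLD = 0.95   # assumed: very confident predictions are actioned automatically
HUMAN_REVIEW_THRESHOLD = 0.60  # assumed: borderline content goes to a human reviewer


@dataclass
class Post:
    post_id: str
    text: str
    user_reports: int = 0


def model_score(post: Post) -> float:
    """Stand-in for an ML classifier returning a policy-violation probability."""
    flagged_terms = {"example_banned_phrase"}
    return 0.99 if any(t in post.text.lower() for t in flagged_terms) else 0.10


def route(post: Post) -> str:
    """Decide what happens to a post: remove, send to human review, or allow."""
    score = model_score(post)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "remove"           # clear violation, actioned automatically
    if score >= HUMAN_REVIEW_THRESHOLD or post.user_reports > 0:
        return "human_review"     # ambiguous or user-reported: a person decides
    return "allow"


if __name__ == "__main__":
    print(route(Post("1", "totally benign post")))                  # allow
    print(route(Post("2", "totally benign post", user_reports=3)))  # human_review
    print(route(Post("3", "contains example_banned_phrase")))       # remove
```

The point of the sketch is the division of labor: automation handles the obvious cases at scale, while anything ambiguous, or anything a real user has flagged, lands in front of a human reviewer.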

Facebook’s content is almost endlessly diverse. For every precisely defined guideline, even the ones that seem oddly specific or bizarre, there are bound to be human reviewers making judgment calls.

Regardless of scope, the key to effective moderation is exploring the minutiae and defining the point on the spectrum where content moves from acceptable to unacceptable. There are more shades of gray than you think.

Have a question about custom moderation? We’d love to hear from you.