
Facebook Updates Policy for Suicide Prevention

October 18, 2019 | Image Moderation, UGC


The past year has seen social media giants consistently adjust their content moderation policies. Recently, we discussed the changes to Twitter's and Instagram's policies. Just last month, Facebook announced a change in its own practices, and around the same time, Mark Zuckerberg spoke about his involvement in determining policy as well as the company's plans for deciding which content should be removed from the site.

Here’s what’s been going on with regard to how the company handles content moderation.

Policy Updates

On September 10th of this year, World Suicide Prevention Day, Facebook announced it would be tightening its policies around content depicting suicide, self-harm, and eating disorders. According to representatives at the social media giant, the change resulted from what they learned after consulting with subject matter experts from around the world.

The largest takeaway from the experts seems to be that content involving suicide or self-harm can promote similar behavior, even when that isn't the intention. Studies have shown an increase in suicides when the subject has been in the public eye, a phenomenon that has been compared to contagious disease, with outbreaks sometimes occurring in clusters. According to the New York Times, “Analysis suggests that at least 5 percent of youth suicides are influenced by contagion.”

Facebook will no longer allow graphic images of cutting and will display a “sensitivity screen” over images of healed self-harm wounds. Images that can unintentionally promote eating disorders, such as those showing protruding bones or a concave stomach, will also be banned, as will any content that promotes unhealthy weight loss.

On top of these latest restrictions, the social media company is currently hiring a Safety Policy Manager – someone who will be in charge of analyzing how users’ health and well-being are affected by Facebook – and is making more resources available to users. For instance, #chatsafe is a resource that people can use to respond to suicide-related content or simply talk about their experiences with suicidal thoughts and similar subject matter.

These updates come at a time when the public as well as U.S. lawmakers are wondering what exactly Mark Zuckerberg’s role is in solving the problems that have cropped up in a world increasingly influenced by social media.

How Involved Is Mark Zuckerberg?

Early last month, Mark Zuckerberg claimed that both he and Facebook COO Sheryl Sandberg play a big part in content moderation decisions. The executives’ involvement in policy decisions has come under scrutiny as the company faces mounting pressure from government regulators. While Zuckerberg and Sandberg maintain that they are “incredibly involved,” the company hasn’t been particularly forthcoming about what this means.

Perhaps the most light has been shed on the matter by Monika Bickert, Head of Global Policy Management at Facebook. “With anything that is very big that a lot of people are talking about, we will absolutely loop them in,” says Bickert. “We will, at the very least, send an email up to Mark and Sheryl so that they know what’s going on.” What remains vague is what actions these executives take and how involved they actually want to be in these decisions.

The fact that Zuckerberg is participating in policymaking at all represents a huge shift in how companies view content moderation, one that WebPurify’s Director of Sales and Client Services, Joshua Buxbaum, has witnessed firsthand.

“When we started in the content moderation business over 12 years ago, many companies didn’t even know what ‘content moderation’ was. And those who did often didn’t see it as important no matter how much we stressed it. Now, it’s at the forefront of many companies’ agendas.”

Facebook’s Future Oversight Board

While the company has left the level of Zuckerberg’s involvement decidedly unclear, it has commented explicitly on its goal of handing control of content moderation policy to an oversight board made up of outsiders. Back in November 2018, Mark Zuckerberg himself wrote a blog post discussing this very initiative:

“First, it will prevent the concentration of too much decision-making within our teams. Second, it will create accountability and oversight. Third, it will provide assurance that these decisions are made in the best interests of our community and not for commercial reasons.”

Recently, Facebook gave more shape to this plan. The board is projected to have 40 members, each limited to three-year terms and a maximum of three terms, and Zuckerberg says the intention is for it to work like an appeals court. That is, users will bring their appeals to the board, and its members will decide which content is banned going forward.

This oversight board suggests a devolution of power, but even the man who brought the idea to Facebook in the first place, Harvard Law School professor Noah Feldman, knows it won’t be a perfect system. “Ultimately, you can’t create a board that is sufficiently representative of the 2 billion people who use Facebook,” admits Feldman. The issues the board will deal with are controversial and complicated, and they are sure to draw disagreement depending on one’s perspective.

Finding board members who understand the nuanced societal and cultural contexts of the cases they will oversee is easier said than done. One way that Facebook plans to overcome this hurdle is by appointing experts on cultures, languages, religions, etc. to help the board deliberate.

How Soon Will It Be Ready?

Facebook aims to have a fully functional oversight board in place by the end of this year. As progressive as it is to make these conversations more public, the company will still retain some power over policymaking. It also raises the question: are company executives looking to lessen their burden of responsibility in an effort to turn down the heat from regulators in Washington?

Regardless, Facebook’s initiative speaks to the importance of pairing human intelligence with technological tools to most effectively moderate content. What’s more, addressing the nuances of user content with a wide range of perspectives seems like a step in the right direction.