In a post last month, we discussed Facebook’s end-of-year goal of instituting an independent oversight board to adjudicate what content will be removed from the site. Here we’ll discuss the recent details that Facebook has provided about its “content jury,” as well as content juries in general, a method that has caught the eye of other organizations.
In fact, Periscope introduced “Flash Juries” several years ago.
Periscope’s “Flash Juries”
In 2016, the live-streaming platform Periscope – which is owned by Twitter – turned to user juries because comments appear in real time over livestreams, leaving users vulnerable to “hit and run” abuse and leaving moderators overwhelmed.
Periscope’s flash juries deliberate on a comment or comments made during a live stream, rather than on the content of the broadcast itself. All it takes is one user reporting a comment as spam or abuse: randomly selected viewers then use majority rule to determine whether the comment warrants a penalty for the user who posted it. If the jurors deem a comment inappropriate, the offender is banned from posting again for one minute.
If it happens a second time, the offender loses the ability to comment on the stream. The flash jury’s ruling is, of course, not the only protection the platform gives its users; streamers are able to limit who can view the broadcast and/or kick people out.
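The escalation logic described above – majority vote, a one-minute mute on the first guilty verdict, loss of commenting on the second – can be sketched in a few lines. This is a toy model, not Periscope’s implementation; the class name, the panel size of five, and the method names are all hypothetical.

```python
import random

class FlashJury:
    """Toy model of a Periscope-style flash jury (hypothetical names and panel size)."""

    def __init__(self, jury_size=5):
        self.jury_size = jury_size          # assumed panel size, not Periscope's real number
        self.strikes = {}                   # user -> number of guilty verdicts so far

    def pick_jurors(self, viewers, reported_user, rng=random):
        # Randomly sample viewers as jurors, excluding the person being judged.
        pool = [v for v in viewers if v != reported_user]
        return rng.sample(pool, min(self.jury_size, len(pool)))

    def verdict(self, reported_user, votes):
        # votes: booleans from jurors (True = the comment is spam/abuse).
        if sum(votes) * 2 <= len(votes):    # no strict majority -> comment stands
            return "no action"
        self.strikes[reported_user] = self.strikes.get(reported_user, 0) + 1
        # First offense: one-minute mute; repeat offense: commenting disabled.
        return "muted 60s" if self.strikes[reported_user] == 1 else "comments disabled"
```

In real use, the votes would come from prompts shown to the selected jurors in the client UI; here they are passed in directly so the decision rule stays visible.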
Arguments in favor of flash juries include:
• It’s harder for a comment to be taken out of context
• The viewers address the issue as opposed to the streamer
• Enforcing a ban is quick and easy
Fully aware that not everyone would like this change, Periscope gave users the ability to opt out of ‘jury duty.’ What’s more, streamers don’t have to turn on comment moderation at all. Some users like to partake in policing the site they love, while others want the platform owners to be responsible for keeping the environment safe.
While Facebook hasn’t gone as far as to let users rule on content, it has gone into greater detail about how it plans to leverage a content moderation jury.
Further Detail on Facebook’s Oversight Board
Last we checked, Facebook was a bit cagey about how its content oversight board would function. While the company has since released more information, the details are still a bit fuzzy.
For starters, Facebook’s governing charter sheds light on how members will be appointed to the oversight board. According to Just Security, Facebook will select an “initial cohort of members” who will then select new members going forward. While this suggests that, over time, the board will grow increasingly independent from the company, both Facebook and the public will be able to propose candidates. What’s more, the initial cohort will likely have a lasting influence over how the board grows and makes its decisions.
Just as both the company and the public can propose board candidates, both can nominate cases for review. However, it’s up to the oversight board to choose which cases it will look at; this decision was also made to increase the board’s level of independence. Financially, the board will be funded through an independent trust set up by Facebook. However, it would be widely regarded as a bad move for Facebook to manipulate the budget, which at this point remains undisclosed.
According to the charter, the oversight board will be given the chance to suggest revisions to content policy. However, in this matter, Facebook is not required to put the board’s suggestions into action.
All members’ names will be made public, but the specific members reviewing a case will be confidential to make sure panelists aren’t targeted for their decisions. The charter claims that, in addition to the official ruling, dissenting opinions and periodic reports will be published. However, the charter does not say that publication is a requirement.
Even as the details of Facebook’s oversight board come into sharper focus, the primary concern seems to be the ability to operate independent of Facebook’s influence. What remains unclear is whether or not this is a truly effective way to moderate content.
The Best Solution?
Using majority rule certainly seems like an ethical solution to the problems social media platforms are facing, but depending on who makes up that majority, it could be a double-edged sword – especially on the internet, where many members of subcultures find solidarity. One also has to wonder whether it’s the most effective solution.
Jonathan Zittrain, in an article he wrote for The Atlantic, calls into question the oversight board’s ability to weather the storm that is the 2020 presidential campaign. He writes that the board will come up against:
“… the placement of hundreds of thousands of distinct ad campaigns—far more than Facebook’s oversight board could handle either directly or on some kind of appeal. And there won’t be easy consensus—outside of those obviously deceptive vote-next-Wednesday messages—around what’s ‘demonstrably false.’ That’s not a reason not to vet the ads, especially when the ability to adapt and target them in so many configurations makes it difficult for an opposing candidate or fact-checking third party to catch up to them and rebut them.”
Elections aside, how a jury deals with a high volume of cases is a reasonable concern.
Not for Everyone
Facebook may be trying to wash its hands of liability for the content it publishes, but most brands have no choice but to be held accountable for the content they produce. While content moderation juries seem like a stride in the direction of a safer digital environment for major social media platforms, it’s hard not to doubt their effectiveness elsewhere.
Strictly relying on user reporting or tools like flash juries is not enough. Due to the complexities and volumes of content requiring review, a multi-tiered approach must also include artificial intelligence combined with a professional moderation company or in-house team that knows how to leverage automation, while still making informed, human decisions.