
From screens to statutes: Dylan Moses discusses his views on the evolution of content moderation

June 7, 2024 | Careers

The growth of user-generated content (UGC) and an increasingly politically charged environment mean the line between safeguarding community standards and infringing on free speech can seem perilously thin. Content moderation stands at the heart of this debate, often misunderstood and misrepresented in broader public discussions. To shed light on this critical distinction, we spoke with Dylan Moses, a veteran Trust and Safety professional with hands-on experience at major tech companies and a Founding Fellow at the Integrity Institute.

Currently finishing his JD at Harvard Law, where he also serves as a fellow at the Berkman Klein Center for Internet and Society, Moses recently wrote a defense of content moderation for Tech Policy Press, offering a blend of practical insight and scholarly perspective on the challenges and nuances of content moderation, and why it is not a form of censorship.

Our conversation explores the intricate balance of maintaining open digital forums while preventing harm, ensuring that the efforts to protect users enhance, rather than impede, free expression.


Tell us a bit about your work in the trust and safety space to date. What makes you so passionate about it?

Dylan Moses: I’ve been working on trust and safety issues for over six years. For three of those years, I focused on content moderation at big platforms like Facebook and YouTube. I started off as a market specialist at Facebook for about a year, and then I moved over to YouTube to focus on policy management for about two years. I dealt with issues around hate speech, misinformation, and violent extremism. For the last three years, I’ve been in law school trying to make it all make sense.

What makes me passionate is the realization of how these online issues impact equity and freedom of speech, seeing the real-world consequences of hate speech, and understanding the power dynamics in discourse online.

At first I didn’t realize I would be as passionate about this work as I am, but day in and day out, when you’re working on content moderation issues like the ones I experienced on the inside, you realize they raise fundamental questions about who gets to speak, and say what, on the internet.

It also opened my eyes to how some people in majority groups, or even public officials, have the power, through their discourse, to marginalize others. As a person of color working on hate speech, it’s always really interesting to see how speech patterns change and affect certain folks on and off platform, and how the hate speech you see online translates into real-world effects offline.

What motivated you to pursue a law degree, and how do you plan to use it? Is it safe to assume you’re aiming to combine your legal training with interpreting and upholding laws centered on trust and safety?

Dylan Moses: I had always been thinking about going to law school, but the turning point for me was the Christchurch terrorist attack in 2019. I was a part of a team that was moderating content around high-profile hateful incidents, and that was one of the most dangerous, most high-profile incidents the platform had seen to date. That day had a big impact on me, and that event highlighted a need for updated policies governing platform responsibilities.

Law school then opened my eyes to Section 230 and broader issues like the First Amendment, privacy, and antitrust laws. Post-law school, I aim to work at a law firm focusing on these areas, especially regarding social media and First Amendment rights.

What do you see as the immediate policy priorities in trust and safety?

Dylan Moses: Currently, there’s a big emphasis on artificial intelligence in the content moderation landscape, which I believe is crucial. Previously, much of our focus was on developing policies where none existed, particularly around defining the boundaries of acceptable speech to ensure we did not unjustly censor legitimate viewpoints. However, we’ve now reached a point where foundational policies, such as those concerning hate speech and terrorism, are well-established and largely static. That is to say, when we see hate speech or terrorism now, we generally know how to categorize it and how to enforce against it consistently with precedent and the company’s values – though of course nuances remain.

This shift means that while the basic frameworks are in place, the challenge now lies in applying these policies consistently and effectively, leveraging AI to manage the vast scale of content.

One immediate priority is the integration of new artificial intelligence models to aid in content moderation. It’s essential that these tools align consistently with our existing policies and are free from bias and discrimination. We must also ensure the durability of these tools over time, maintaining their effectiveness and fairness as technology evolves.

Another very important priority is the transparency of our content moderation processes. Historically, there’s been hesitancy to fully disclose the mechanics of policy enforcement to avoid manipulation by bad actors, but given the societal impacts from events like the Capitol riot, I believe the need for transparency has grown. Platforms need to have a more open discussion about how their policies are applied and enforced, and the bells and whistles used in the background to promote and prioritize certain content. This involves shedding light on the algorithms that shape user environments and the moderation processes themselves. I think this sort of transparency is important not only for user trust but also for regulatory scrutiny and understanding.

In the US, efforts like the proposed Platform Accountability and Transparency Act aim to foster this transparency. These initiatives are designed to make platform operations more understandable to the public and lawmakers, helping them see how decisions are influenced and enforced.

And I think it’s important to reassess existing regulations like Section 230. The legal framework needs to evolve to balance platform rights with user safety, ensuring platforms can’t completely shield themselves behind First Amendment defenses or terms of service agreements when issues arise that cause collateral harm to users and the platform didn’t take reasonable measures to curb those effects.

The conversation around the First Amendment is also intensifying, with discussions about how platforms’ rights might differ from those of individuals. We’ve had a hundred years of First Amendment jurisprudence at this point, which has generally served everyday individuals like you and me in everyday contexts.

However, when these established legal frameworks are applied to online platforms, a complex challenge arises. Due to the protections afforded by the First Amendment, Section 230 of the Communications Decency Act, and various contractual terms of service, platforms often become virtually immune to lawsuits and regulation. This three-part legal shield creates a unique environment where traditional approaches to governance and accountability are not directly applicable, leaving a gap in our ability to regulate platform operations effectively.

And don’t get me wrong, the First Amendment, Section 230, and contract law are great, and there are plenty of good reasons why platforms should be protected. However, I think we need to take a nuanced look at whether platforms should have the same speech freedoms as people when their decisions can significantly impact public discourse. We need to create a regulatory environment that balances freedom of expression with the need to protect users from harm, ensuring that trust and safety policies are both fair and transparent.

Stay tuned for Part 2 of our interview with Dylan…
