
Key moments in content moderation

April 16, 2024 | UGC

Today, content moderation acts like the “immune system” of the world’s online communications, filtering out harmful content to make the web a safer, more civil and trustworthy space for everyone.

Most apps and websites that feature user-generated content now have comprehensive rules that users must follow, automated systems to scan for breaches, and teams of human content moderators to review and remove problematic content.

But it hasn’t always been this way.

The early days of content moderation

When the early internet began connecting computers across the globe in the 1970s and early 1980s, it was mainly a tool for professionals in academia, science and the military. In that context, it was broadly assumed that people would act responsibly online as a matter of professional ethics and decorum.

In the late 1980s and early 1990s, however, the internet spread beyond major institutions and became more accessible to the general public. The rise of communities via early online spaces such as bulletin boards and forums led to the emergence of system operators, known as “sysops,” who would remove spam, threats and other unwelcome content.

Then the early 2000s marked a new chapter, with the emergence of mass-market social media platforms like MySpace (2003) and Facebook (2004). Along with these social networks came the rise of user-generated content (UGC). Suddenly, people had the power to publish anything, and, as you can imagine, this created many headaches.

The need for automation

The surge of user-generated content that these new platforms encouraged made it impossible for every post to be inspected personally. On the one hand, this gave visitors a thrilling sense of freedom, with the feeling that they could post anything they wanted. On the other, cyberbullying, hate speech, and inappropriate content targeting minors sparked public outrage and forced MySpace and Facebook to ramp up their content moderation efforts.

Consequently, this period saw the development of early content moderation teams and community guidelines, although these were often criticized for being reactive and inconsistent.

The rise of YouTube in 2005 presented a new challenge: moderating vast amounts of video content. Again, YouTube could not possibly inspect each clip individually, so instead it implemented automated systems alongside human moderators to flag and remove copyrighted material, hate speech and violent content. This period saw the development of automated filtering tools and an ongoing debate about balancing free speech with content removal.
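For illustration, here is a minimal sketch of the hybrid approach described above: an automated keyword pre-filter that flags suspect posts and routes them to a human review queue instead of publishing them. The blocklist, data structures and logic are hypothetical assumptions for the example, not any platform’s actual rules or systems.

```python
# Hypothetical sketch of a hybrid moderation pipeline: an automated keyword
# pre-filter flags suspect posts, and anything flagged is routed to a human
# review queue rather than being removed automatically. The blocklist and
# data structures are illustrative assumptions, not a real platform's API.

from dataclasses import dataclass, field

BLOCKLIST = {"spamword", "slur_example", "threat_example"}  # placeholder terms


@dataclass
class Post:
    post_id: int
    text: str


@dataclass
class ModerationQueue:
    pending_human_review: list[Post] = field(default_factory=list)


def auto_flag(post: Post) -> bool:
    """Return True if the post contains any blocklisted term."""
    words = {w.strip(".,!?").lower() for w in post.text.split()}
    return not BLOCKLIST.isdisjoint(words)


def triage(posts: list[Post], queue: ModerationQueue) -> list[Post]:
    """Publish clean posts immediately; send flagged ones to human moderators."""
    published = []
    for post in posts:
        if auto_flag(post):
            queue.pending_human_review.append(post)
        else:
            published.append(post)
    return published


if __name__ == "__main__":
    queue = ModerationQueue()
    posts = [Post(1, "Check out my new video!"), Post(2, "This is spamword nonsense")]
    live = triage(posts, queue)
    print(f"published={len(live)}, awaiting review={len(queue.pending_human_review)}")
```

Even this toy version shows why automation alone was never enough: keyword filters miss context and produce false positives, which is why flagged items still go to human moderators.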

Tackling propaganda and fake news

The 2010s witnessed a significant escalation in the challenges and complexities of content moderation. Key moments during this decade included the Arab Spring uprisings of 2011-2012, where social media played a crucial role in mobilizing protestors but also raised concerns about government censorship and content manipulation.

The emergence of ISIS and its use of social media in 2013-2014 for propaganda and recruitment posed another major challenge, leading to increased use of automated detection tools and human moderators trained to identify terrorist content.

The Gamergate controversy of 2014-2015, a harassment campaign targeting women in the gaming industry, exposed the dark side of online communities. Platforms were criticized for their inadequate responses to online abuse, which revealed the need for better harassment reporting and enforcement mechanisms. The episode highlighted the importance of content moderation in addressing online harassment and creating safer online spaces.

The 2016 US election and the Brexit referendum in the UK fueled concerns about the spread of “fake news” and disinformation campaigns. Baseless stories such as ‘Pope Francis has endorsed Trump’ were widely circulated, often created for profit rather than for political reasons. For instance, in Macedonia, resourceful teenagers established a business creating fake news stories about American political figures. These tall tales would then go viral among partisans in the US, allowing the youngsters to earn hefty advertising revenue.

In the same year, a conspiracy theory dubbed ‘Pizzagate’ falsely claimed that Democrats were running a child sex trafficking ring out of a Washington, D.C. pizzeria. Ultimately, the theory incited a man to open fire inside the restaurant with an assault rifle; no one was injured, but the incident exemplified how a failure to effectively moderate misinformation can lead to real-world violence, and it underscored the need for better content moderation to limit the spread of incendiary conspiracies.

In 2019, the relationship between violence and social media hit the headlines again, but for a different reason. A mass shooting at a mosque in Christchurch, New Zealand, was live-streamed on Facebook. This shocking incident highlighted the challenges of preventing the spread of violent content online and sparked debates about real-time content moderation and improved reporting mechanisms. While platforms were already aware of the need for robust content moderation and trust and safety strategies by 2019, what this tragedy also made clear was the need for a crisis response strategy for when shocking events unfold in real time.

Misinformation and deepfakes

The 2020s have continued to present new challenges for online platforms and content moderators alike. The COVID-19 pandemic highlighted the dangers of misinformation spreading rapidly online, contributing to vaccine hesitancy and public health challenges. The attack on the US Capitol on January 6th, 2021, raised fresh concerns about the role of social media in facilitating the spread of extremist content and fueling real-world political instability.

Meanwhile, the ongoing wars in Gaza and Ukraine present a complex challenge, requiring platforms to balance the need to remove pro-war propaganda and disinformation with allowing documentation of the conflict and supporting journalists. Several countries have proposed or enacted new regulations aimed at holding platforms more accountable for content moderation, addressing issues like hate speech, misinformation and algorithmic bias.

Most recently, the rise of AI has presented further challenges. One example was the widespread sharing earlier this year of AI-generated explicit images of Taylor Swift, despite such content violating platform rules. This raised grave concerns about the ease with which generative AI can be used to create damaging deepfakes, further complicating the content moderation landscape. WebPurify has adapted to these challenges by developing a synthetic media model that can detect images created by generative AI. It’s one of the many tools we’re using to stay ahead in the cat-and-mouse game against AI.
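As a rough, purely illustrative sketch of how a synthetic-media detector could slot into a moderation workflow, the snippet below scores an image and escalates high-scoring content to human review. The scoring function, threshold and names are hypothetical assumptions, not WebPurify’s actual model or API.

```python
# Hypothetical illustration only: a stand-in classifier returns a
# "likelihood this image is AI-generated" score, and a simple policy routes
# high-scoring images to human review. The scorer, threshold and names are
# assumptions for the example, not WebPurify's real model or API.

from typing import Callable

REVIEW_THRESHOLD = 0.8  # assumed cutoff for escalating to human moderators


def route_image(image_bytes: bytes,
                synthetic_score: Callable[[bytes], float]) -> str:
    """Return an action for the image based on its synthetic-media score."""
    score = synthetic_score(image_bytes)
    if score >= REVIEW_THRESHOLD:
        return "escalate_to_human_review"
    return "allow"


if __name__ == "__main__":
    # Dummy scorer that pretends every image looks 90% likely to be synthetic.
    dummy_scorer = lambda _img: 0.9
    print(route_image(b"\x89PNG...", dummy_scorer))  # escalate_to_human_review
```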

Politicians and world leaders are also now starting to pay attention to the threat. In October 2023, President Biden issued an Executive Order outlining new standards for AI safety and security, ensuring responsible use of the technology while protecting individuals’ privacy.

In Conclusion

In a relatively short amount of time, content moderation has evolved from a straightforward task of managing online communities to a complex and multifaceted endeavor that requires a delicate balance between protecting free speech, maintaining user safety and upholding ethical standards.

The debates surrounding content moderation have become increasingly nuanced, encompassing issues of transparency, accountability, algorithmic bias, and the role of automated systems versus human moderators.

At WebPurify, we’ve long been at the forefront of this evolving landscape, developing and refining content moderation solutions for nearly two decades.

With our expertise spanning various industries, and working with one in seven Fortune 100 companies, we’ve tackled a wide range of challenges using our advanced technology, including our image model, which can detect synthetic media and AI-generated images.

As the digital landscape continues to evolve, it is clear that content moderation will remain a critical issue that requires ongoing discussion, collaboration, and innovation. Our longevity and experience position us as a valuable partner for organizations grappling with the complexities of content moderation.