
Emerging trends in online safety: key insights from WebPurify

December 29, 2023 | UGC

Imagine an army of artificially intelligent bots and human curators playing an endless game of 4D chess. Their mission? To sift through the storm of content you’re deluged with every day, pulling out shards of misinformation, disinformation, and other unsavory bits that affect your wellbeing.

This isn’t a scene from a cyberpunk novel. It’s the reality of living online in 2024 – a world where content moderation is no longer a backdrop but a protagonist in the ever-unfolding story of the internet. As the norms of what’s acceptable in online discourse get continuously redrawn, and as lawmakers, tech titans, and everyday users wrestle over the soul of the internet, one thing is clear: we’re in the midst of an unprecedented transformation.

What’s driving these changes? What hurdles do we face, and where does the solution – or a host of them – lie? As a leader in this high-stakes, high-impact arena, WebPurify is not just asking these questions; we’re answering them.

Top trends in online safety

1. The rise of Artificial Intelligence and genAI

Generative AI has emerged as a vanguard technology that marries machine learning with the ability to create content, whether it’s text, images, or video. Its capabilities for content moderation are very promising, offering an arsenal of tools that can comprehend the context of user-generated content (UGC) on a scale that was once unattainable.

But, of course, this groundbreaking technology doesn’t come without its challenges. Pros like automated efficiency and large-scale content analysis are offset by cons such as the risk of false positives and the need for continual model updates. The road to adoption is also filled with hurdles, from high implementation costs to technical complexity that means teams will need to upskill. Integrating genAI into content moderation workflows will take time – and investment – but its potential to transform content moderation is enormous.

The hybrid moderation paradigm

Despite its promise, though, AI’s inherent limitations mean it will always be an incomplete solution to the growing challenges of content moderation – not least, how to deal with sophisticated AI-engineered malicious content. A paradigm shift towards a hybrid moderation model, combining the computational prowess of AI with the discerning acumen of human moderators, offers the most failsafe approach. This blended model ensures that moderation teams preserve the nuance of human language and intent.

WebPurify has already adopted this hybrid moderation model, which is better equipped to navigate the challenges of novel content types such as AI-generated media.
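To illustrate what this blend can look like in practice, here is a minimal sketch – in Python, using hypothetical names and thresholds rather than WebPurify’s actual (non-public) pipeline – of confidence-based routing, where an AI model’s high-confidence calls are automated and anything ambiguous is escalated to a human moderator.

```python
from dataclasses import dataclass

# Hypothetical thresholds -- real deployments tune these per content
# category and policy; these values are illustrative only.
AUTO_APPROVE_BELOW = 0.05  # violation scores under this are auto-approved
AUTO_REJECT_ABOVE = 0.95   # violation scores over this are auto-rejected

@dataclass
class ModerationDecision:
    action: str             # "approve", "reject", or "human_review"
    violation_score: float  # model's confidence that the content violates policy

def route(violation_score: float) -> ModerationDecision:
    """Route one piece of UGC based on the AI model's violation score.

    High-confidence calls are automated; ambiguous content (sarcasm,
    novel slang, AI-generated edge cases) is escalated to a human
    moderator, preserving nuance the model may miss.
    """
    if violation_score >= AUTO_REJECT_ABOVE:
        return ModerationDecision("reject", violation_score)
    if violation_score <= AUTO_APPROVE_BELOW:
        return ModerationDecision("approve", violation_score)
    return ModerationDecision("human_review", violation_score)

# Example: a borderline post lands in the human review queue.
print(route(0.62))  # ModerationDecision(action='human_review', violation_score=0.62)
```

The thresholds encode the core trade-off: widening the automated bands raises throughput but also the risk of error, while the human labels generated by the review queue can be fed back to retrain and improve the model.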

The onset of net-new risks

With great power comes great responsibility, and AI is no exception. The very technologies that fortify our content moderation efforts can, in nefarious hands, morph into instruments of abuse and deceit. A case in point is the emergence of deepfakes and synthetic disinformation campaigns. These malevolent applications of AI mean content moderators must continually evolve their strategies to meet new threats. As we explained in our blog on the threats posed by generative AI, it’s imperative for content moderators to stay a leap ahead of the bad actors poised to misuse AI.


2. The drumbeat of global regulation

As new online risks emerge, governmental bodies around the world are stepping up to the challenge, with the European Union leading the charge to usher in standardized online safety measures. The EU’s Digital Services Act (DSA), adopted in October 2022, aims to harmonize rules across member states on online platforms’ liability for harmful content. Its most stringent obligations are aimed at digital giants like Google, Amazon, and TikTok.

We’ve also seen the UK’s Online Safety Act cross the parliamentary threshold in October 2023; it will require platforms, including encrypted messaging services, to moderate illegal content and protect children’s safety. The law also gives the UK’s media regulator, Ofcom, new powers to issue hefty fines if platforms fail to meet a new duty of care to protect their users from harm.

What’s more, President Biden issued a US Executive Order on AI in October 2023, aimed at addressing evolving AI challenges: setting standards for safety and security, reducing discriminatory bias in AI models, and supporting workers displaced by the technology. It is a first, albeit important, step at the federal level to address the rise of sophisticated AI.

Landmark legal discussions

In Gonzalez v. Google, the district court dismissed the plaintiffs’ claims under Section 230 of the Communications Decency Act and the US Court of Appeals for the Ninth Circuit affirmed; in May 2023, the Supreme Court declined to rule on the scope of Section 230, vacating the judgment and remanding the case in light of its companion decision in Twitter v. Taamneh. The case nonetheless put Section 230 under the spotlight and sent ripples through the legal and digital communities.

Another landmark pair of cases, Moody v. NetChoice and NetChoice v. Paxton, is slated for argument in the Supreme Court’s 2023-2024 term. The cases challenge Florida and Texas laws that attempt to limit the ability of online platforms to remove user content or ‘de-platform’ users, raising First Amendment questions about content moderation. They will no doubt fuel a broader national discourse on the extent of regulation over social media platforms.

The evolving dialogue

The interplay between these various stakeholders is morphing into a more refined, solutions-driven dialogue. As a leader in content moderation services, WebPurify is not just a spectator but an active participant, contributing our expertise and advocating for pragmatic, enforceable standards – championing a balanced approach that safeguards both users and platforms.

3. The deepfake dilemma and a maelstrom of misinformation

In a world where seeing is no longer believing, deepfakes have evolved from a benign novelty into a stark reality that casts a long, disconcerting shadow over all digital content. No longer confined to the fringes of online amusement, deepfakes have become a powerful tool for those intent on sowing the seeds of discord and misinformation. As important election cycles draw nearer in the US, UK and other nations around the world, there is a renewed urgency to identify and neutralize this type of content before its misuse escalates and poses a threat to the democratic process.

Deepfakes, though, are just one cog in the larger machinery of misinformation that churns without end. Our online life is rife with falsehoods, distorted narratives, and manufactured realities, leaving people more divided and less informed. That’s why it’s crucial for platforms to develop systems for identifying and neutralizing this content.

WebPurify’s counteroffensive

WebPurify is making concerted strides towards countering these forces that jeopardize the sanctity of online platforms. By making substantial investments in cutting-edge technologies and honing our robust methodologies, WebPurify is poised to confront the challenges posed by the torrent of misinformation.

4. Real-time reactions to global crises

As we outlined in our crisis response eBook, we live in an age where news doesn’t just break, but shatters across the global internet the moment it unfolds. The immediacy of information dissemination in today’s interconnected world again underscores the pivotal role of content moderation, especially when it comes to global crises.

The fighting in Ukraine and tensions in the Middle East – these aren’t merely geopolitical flashpoints, but arenas where information warfare is waged with fervor on social media platforms. The battle to control the narrative unfolds at a blistering pace, and every post, every tweet, every video could sway public opinion or provoke volatile reactions. The rise of genAI and deepfake technology only stands to exacerbate unfolding crises like these.

The need for swift moderation

When dealing with real-time information exchange, it’s more important than ever that content moderation is both swift and precise. The ability to sift through huge amounts of user-generated content and discern the inflammatory from the informative, the malicious from the benign, is key to ensuring that users’ dialogue remains constructive rather than destructive. Any delay in moderation can be a potential catalyst for misinformation, panic, and hostility.

5. The real-world repercussions of UGC

In 2024, UGC extends far beyond the websites and apps people use daily; it has seamlessly transitioned into the tangible world in ways that pose real risks, particularly to children and other vulnerable groups. In a landmark acknowledgment of this issue, the US Surgeon General issued an advisory in May 2023 warning of the adverse effects social media use can have on the mental health of young people. The advisory highlights the urgency of addressing the potential harm stemming from negative online interactions, such as cyberbullying, and their impact on youth mental health. In doing so, the Surgeon General has initiated a much-needed broader discourse on the ramifications of UGC, stirring both public and private sectors to take action.

Another distressing example of the real-world dangers of UGC is the rising incidence of teen sextortion. A disturbing report published in October 2023 highlights the perilous reality in which many teenage boys find themselves ensnared. Perpetrators, often hiding behind the veil of anonymity their digital devices provide, manipulate and blackmail these youngsters, causing severe emotional and, in some cases, physical distress.

These are just a few examples that underscore the real-world harm that can stem from UGC. They depict a grim narrative where the boundary between the online and offline worlds is becoming increasingly blurred.

Only through collective efforts and a renewed commitment towards ensuring online safety can we hope to mitigate the threats posed by UGC, thereby creating a safer digital and physical space for all, particularly the young and vulnerable.

A multifaceted approach

Given the multidimensional impact of UGC, the solution won’t come solely from technology. It will involve forging alliances across a spectrum of stakeholders – health agencies, law enforcement, and educational institutions – to foster a holistic approach to content moderation. Alongside tech-centric solutions, we need a multifaceted approach that addresses not just the symptoms, but the root causes and the overarching societal impact of violative content.

As we progress into 2024, WebPurify is dedicated to remaining at the forefront of these changes. Our collaborations with industry stakeholders and the global community will continue to help shape a more secure, positive digital ecosystem.