We produce 4.6 billion pieces of online content every day. While this proliferation of everything from product reviews and memes to selfies and livestreams makes for a more engaging, dynamic Internet, it also represents a formidable challenge where keeping users safe is concerned. Recent years have seen companies investing heavily in Trust and Safety, including both AI and human moderation solutions. At the same time, governments the world over are taking note of the steady growth of user-generated content (UGC) and legislating accordingly, ensuring online platforms observe at least some minimum standard for reviewing user uploads. It’s a start, and the obvious bad content is certainly being caught, but can these measures keep up with the pace of online innovation, and what comes next?
In this ebook, leading experts share their thoughts on:
- Human moderation, AI moderation, and combining the two.
- Future trends and technology impacting the industry.
- How businesses can future-proof their moderation.
- The challenges of moderating the metaverse, and potential solutions.
“People are always going to find ways to exploit technology,” says Alex Popken, WebPurify’s VP of Trust and Safety Operations and former Twitter Head of Trust and Safety. “The future of content moderation for UGC is really about seeing what new technologies are emerging and understanding what known and unknown risks they pose. It’s a question of how we will evolve content moderation practices in line with these new technologies and the ways in which humans will exploit them.”
This ebook explores these technologies and makes the case for how to prepare.