
“Content moderation is creating a safer, more inclusive online world”: Meet Alex Popken, WebPurify’s VP of Trust & Safety

March 23, 2023 | Announcements

“You’re in it because you understand the importance of this work,” says Alexandra Popken, who joined WebPurify as its first VP of Trust and Safety – moving from her position as Head of Trust and Safety Operations at Twitter – with a mission to help protect digital communities at scale and keep the company at the forefront of content moderation solutions. “This work is not for the faint of heart. When you’re a guardian of the internet, it’s challenging because you can be exposed to the worst of humanity. You have to remember how your work is positively impacting society.”

Alex joined Twitter in 2013 as a content moderator focused on the microblogging platform’s growing ads business. She worked on the front line, reviewing hundreds of thousands of user-generated text comments, images and videos, and helping to define what was and wasn’t acceptable. She went on to build an operations team focused on ad quality and brand safety, and her last post at the company was leading its large trust and safety operations organization, determining how the platform’s rules would be enforced at scale.

As an industry thought leader, Alex now joins the WebPurify team with deep knowledge of the challenges brands face from user-generated content – challenges that are constantly evolving. But she also brings the unique perspective of knowing the other side of the content moderation process, having been responsible for overseeing an internal trust and safety team that protected a leading tech company’s brand integrity and the safety of millions of daily users. 

Here, Alex shares her insights into content moderation best practices and explains why it’s more important than ever for brands to prioritize it. From scaling safety strategies at a fast-growing platform to auditing moderator performance, she reflects on the lessons she’s learned throughout her career.

Tell us, why WebPurify?

It was important for me to join a company that is working hard to create a net positive impact in the world. At WebPurify, we work with companies to keep their communities safe by offering industry-leading AI and human moderation services – and by extension, we keep these companies’ brand reputations safe. These aren’t just social media platforms – WebPurify helps a diverse set of clients, from e-commerce to dating to gaming, deal with the risks of user-generated content.

Also, as a former Twitter leader responsible for scaling the enforcement of the platform’s rules, I know how reliant we were on scalable and effective moderation solutions – and those are exactly the solutions WebPurify offers. For me, it was also exciting to be on the other side, operating as a thought partner for companies seeking to bolster their content moderation measures.

What do you think makes WebPurify stand out in the marketplace? 

Firstly, we’re industry veterans. We’ve been around for 16 years, so we’ve seen the evolution of user-generated content and technologies that pose new risks. We’ve been able to mature and adapt our offerings to keep them on the cutting edge of what’s happening. So, for example, we moderate the metaverse. Second, we’re nimble and client-focused. We don’t just offer AI moderation or just human moderation – we really tailor our services to the needs of our clients, which is unique in this industry. 

Lastly, moderator wellness is baked into our DNA. Our moderators are full-time, in-office employees with robust benefits, and their career satisfaction, ongoing growth and mental health are top priorities of ours.

Are there particular areas of content moderation that you feel most passionate about?

When we talk about content moderation, it’s sort of a nebulous concept for a lot of people. But at the end of the day, it’s really about human safety. Are we keeping the users of the internet safe? And that includes children, right? Vulnerable populations who may be exposed to exploitative behaviors or content. It’s really important that we think about humans and the ways in which they’re interacting online and put in place safeguards to protect vulnerable populations first and foremost. I feel really strongly about this.

What do you think your experiences at Twitter offer WebPurify’s clients?

Twitter is really the modern-day zeitgeist, and we were dealing with unique harms across our platform, at a massive scale, for which we needed to leverage various effective content moderation solutions. I have first-hand experience mitigating myriad harms that user-generated content presents on the world stage that Twitter represents. I can bring those learnings to companies from all industries facing similar but also unique challenges. 

I was also responsible for implementation at scale. So, not only can I help clients grapple with which community principles and guidelines to adopt, but I also have deep experience as an operator who leveraged technology and people to put those protections in place. I pride myself on being a doer.

What are the most important lessons you have learned in your time as a Head of Trust & Safety Operations?

That it’s a never-ending pursuit. It’s a job in which you are constantly needing to assess risks – risks that haven’t even materialized yet. As we see new technologies come on the market – for example, sophisticated chatbots or generative AI – it’s really up to the companies creating and integrating these technologies into their platforms to think about the ways in which they can be misused. If you are not doing this, it will impact trust with your user base, your company’s reputation, and ultimately your bottom line. 

Another lesson is that you are only as effective as the tools at your disposal. Even if you identify a harm, it can be really difficult to remove it at scale. That’s why companies rely upon content moderation services like WebPurify to help them achieve a level of moderation that they would never be able to achieve in-house.

Alex Popken, VP of Trust & Safety at WebPurify

At Twitter, you worked with AI and human moderation teams. How important is it to have this hybrid approach to ensure user safety and brand reputation?

They need to be viewed as complementary. Machines allow for scale and the removal of the most egregious content, but people provide the second layer and can understand nuance, context and complicated grey areas. Typically, when we work with clients, we offer them a combination of the two. We believe a hybrid approach is the most effective solution.

I’ll give you a real example. On a client call last week, my colleague used a figure of speech: “There’s more than one way to skin a cat.” A machine probably wouldn’t be able to pick up on the fact that this isn’t literal, and that’s where human decision-making comes into play. 
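To make that hybrid approach concrete, here is a minimal, hypothetical sketch in Python – not WebPurify’s or Twitter’s actual pipeline or API – of how a machine first pass might auto-action high-confidence decisions and escalate grey-area content to human reviewers. The classify stub, thresholds and example phrases are illustrative placeholders.

# Illustrative sketch of a hybrid moderation pipeline (hypothetical; not a real product's API).
# A machine pass handles clear-cut cases at scale; ambiguous items go to a human reviewer.

from dataclasses import dataclass

@dataclass
class ModerationResult:
    decision: str   # "approve", "reject" or "human_review"
    score: float    # estimated probability that the content is violative

def classify(content: str) -> float:
    """Stand-in for an ML model; a real system would return a learned violation score."""
    text = content.lower()
    if "examplebannedterm" in text:   # hypothetical, clearly violative phrase
        return 0.99
    if "skin a cat" in text:          # figure of speech a model might flag as ambiguous
        return 0.50
    return 0.05

def moderate(content: str,
             reject_threshold: float = 0.95,
             approve_threshold: float = 0.10) -> ModerationResult:
    score = classify(content)
    if score >= reject_threshold:     # most egregious content: remove automatically
        return ModerationResult("reject", score)
    if score <= approve_threshold:    # clearly benign content: approve automatically
        return ModerationResult("approve", score)
    # Grey area (sarcasm, figures of speech, context-dependent content):
    # route to a human moderator for the second layer of review.
    return ModerationResult("human_review", score)

print(moderate("There's more than one way to skin a cat").decision)  # human_review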

WebPurify is known for its meticulous quality control. When you were Twitter’s Head of Trust and Safety Operations, what methods did you use to evaluate and monitor the performance of content moderators?

Quality control is critical. It’s extremely important that rules are applied consistently and that users understand what these rules are. They need to see fairness applied across a platform because incorrect or inconsistent decisions can quickly undermine a company’s credibility. 

When assessing the accuracy of machine and human decisions, you are looking at false positives and false negatives. False positives are instances in which the machine or person was too restrictive and unnecessarily blocked good content, and false negatives are instances in which the machine or person wasn’t restrictive enough and let violative content through. Regular sampling should be conducted to assess accuracy.
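As an illustration of that sampling process – a hypothetical sketch with made-up field names and data, not Twitter’s or WebPurify’s actual QA tooling – false positive and false negative rates can be computed by comparing each sampled decision against a second, “golden” review:

# Minimal sketch of a sampled quality audit (hypothetical data and field names).
# Compare moderator/model decisions against a golden QA label and report error rates.

from dataclasses import dataclass

@dataclass
class ReviewedItem:
    decision_violative: bool   # what the moderator or model decided
    truth_violative: bool      # what the golden QA review concluded

def audit(sample: list[ReviewedItem]) -> dict[str, float]:
    false_positives = sum(1 for r in sample if r.decision_violative and not r.truth_violative)
    false_negatives = sum(1 for r in sample if not r.decision_violative and r.truth_violative)
    good_items = sum(1 for r in sample if not r.truth_violative)
    bad_items = sum(1 for r in sample if r.truth_violative)
    return {
        # share of good content that was unnecessarily blocked
        "false_positive_rate": false_positives / good_items if good_items else 0.0,
        # share of violative content that slipped through
        "false_negative_rate": false_negatives / bad_items if bad_items else 0.0,
    }

# Example: a small sample pulled for QA review
sample = [
    ReviewedItem(decision_violative=True,  truth_violative=True),
    ReviewedItem(decision_violative=True,  truth_violative=False),  # false positive
    ReviewedItem(decision_violative=False, truth_violative=True),   # false negative
    ReviewedItem(decision_violative=False, truth_violative=False),
]
print(audit(sample))  # {'false_positive_rate': 0.5, 'false_negative_rate': 0.5}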

It’s also critical that you have objective, detailed guidelines, and that you’re assessing errors against those guidelines. When trends are noticed, it may warrant updating or refining moderation policies and processes.

How have you maintained the balance of implementing safety strategies while preserving the user experience?

I don’t think the two are mutually exclusive. In order to preserve a positive user experience, you do need some level of moderation. If a user is in an online space and subject to harassment and hateful content, it’s not a place they are going to want to be. It’s critical to have practices in place to remove that content and preserve a positive experience. The reality is that moderation is creating a safer, more vibrant, and inclusive online environment for users. 

What have been the biggest changes in UGC and user safety throughout your career?

People are people, and they are always going to find ways to exploit technology. We’re constantly seeing new technologies emerge that pose both known and unknown risks, and we’re always trying to evolve our content moderation tactics in line with those technologies and risks.

Another big shift is in regulation. Ten years ago people weren’t really aware of the harms of UGC; it wasn’t widely talked about. Today, we’re seeing calls to widen the regulation of online spaces. There needs to be an ongoing conversation and working partnership between online platforms, government, civil society and academia to align upon appropriate safeguards and transparency measures. When these groups are working together and not against each other, good will come from that. 

What does adapting strategies to meet changing user safety needs look like in practice?

When I joined Twitter in 2013, our enforcement tools were very limited – we had blunt instruments. But over time, our tools became more sophisticated and we were able to be much more surgical in finding harms online and removing them, or even proactively mitigating them. Being able to shift from a reactive approach to a proactive one was really important. Content moderation is at its best when it’s preventative. 

A critical part of that process of shifting to a proactive approach was using vendors to help scale our human moderation team. Working with companies like WebPurify allowed us to really up-level our ability to moderate content at scale.

What advice would you give to someone who is either starting out in a similar role or has a platform they are getting off the ground?

It’s important to make sure that you are baking trust and safety practices into your company from day one. Don’t assume you’re going to be lucky and avoid the risks of UGC. Inevitably, there is going to be a crisis, and if you don’t have the appropriate safeguards in place to moderate user-generated content, you’re going to fall behind. 

If you are new to this area and you’re not quite sure how to moderate content, make sure that you are consulting with experts in this domain who can help guide you towards the policies, processes and technology you need to ensure that you are maintaining a safe online space.

What are some of the lingering doubts or questions that startup firms might have about bringing in a content moderation team, and what would you say to them?

I think there is sometimes a perception that when you bring on a third-party content moderation team, they’re not going to produce high-quality work. The reality is that their work is oftentimes higher quality than that which you can achieve in-house because this is their bread and butter. They also have quality assurance measurement practices in place to ensure accuracy from day one. Content moderation vendors should be viewed as thought partners and an extension of your team.

What has been the most rewarding part of your career path?

What’s really rewarding is when you can see the real-world impact of the work that you’re doing. CSAM (child sexual abuse material), for example, is extremely difficult content to moderate, but at WebPurify we’ve been able to put hundreds of child predators behind bars by detecting this content and reporting it to authorities. When you see the real-world impact of the work we’re doing, and the ways in which our moderation is protecting children and vulnerable populations, it is immensely gratifying.