
Trust & Safety leaders to watch and follow

November 11, 2024

Trust & Safety is a relatively new industry, so it’s not surprising that its leaders come from diverse backgrounds – child psychology, human rights, data ethics, cybersecurity. But they all share a common goal: to protect online communities and make the internet a safer place. These experts are shaping the future of online safety, addressing challenges that range from combating misinformation and preventing harassment to safeguarding vulnerable groups and ensuring data privacy.

Over the past year, we’ve interviewed, profiled, and collaborated with many Trust & Safety leaders to share their unique perspectives, wealth of experience, and specialist knowledge.

But these individuals are not just experts – they’re advocates. They are deeply invested in the social, ethical, and human dimensions of trust and safety, recognizing that the work they do touches on some of the most pressing issues of our time: privacy, freedom of expression, child protection, and the right to information.

Here, we’re spotlighting some of the leaders and influencers in the Trust & Safety space who’ve shared their knowledge and insights with us. If you’d like to join our panel of experts, get in touch!

Caroline Humer on the challenges platforms face in detecting and reporting CSAM, and the need for effective tools and policies

1. Caroline Humer

Caroline Humer had a long career working for the National Center for Missing & Exploited Children (NCMEC) and its sister organization, the International Centre for Missing & Exploited Children (ICMEC), before setting up her own consultancy in 2021, advising companies on how to build frameworks or mechanisms to protect vulnerable people, both online and offline.

“The sheer volume of CSAM is overwhelming, making it crucial for technology companies to work together to develop more sophisticated detection tools.”

Why do we follow Caroline?

Caroline is one of the world’s leading experts in the detection and reporting of child sexual abuse material (CSAM). She advocates for continuous innovation in detection methods and for industry collaboration to stay ahead of offenders who constantly find new ways to exploit technology.

Caroline wants to see a multi-faceted approach that combines advanced algorithms, human expertise, and robust reporting mechanisms to effectively combat the proliferation of CSAM online. She argues that no single entity can tackle this issue, and believes that partnerships are key. “We need a unified approach where law enforcement, NGOs, and tech companies join forces to protect vulnerable children.”

She also points out that effective collaboration can lead to more comprehensive strategies and improved outcomes in the fight against child exploitation, and stresses the importance of sharing best practices and resources, as well as the need for ongoing training and support for those working on the front line.

E-commerce safety: how brands and platforms can protect children and young shoppers

2. Vaishnavi J

Vaishnavi J has over 15 years’ experience developing, scaling, and enforcing youth safety policies at leading technology companies. As Head of Youth Policy at Meta, she oversaw the company’s approach to age-appropriate experiences across all its products, including Reality Labs, Instagram, Facebook, Messenger, and WhatsApp. She also led video policies at Twitter, where she served as the company’s first Head of Safety for the Asia-Pacific region, and led child safety policy in the region for Google’s central team.

Now a policy advisor at VYS, the firm she founded, Vaishnavi leads a team that helps companies, governments, and civil society responsibly design products that support the safety and wellbeing of children.

“Ensuring the safety of young shoppers is not just about technology, but also about creating a culture of vigilance and care.”

Why do we follow Vaishnavi?

Vaishnavi is well-versed in the risks young consumers face, such as exposure to inappropriate content, privacy breaches, and fraudulent activities.

Her strategies for making e-commerce safer begin with encouraging platforms to adopt robust safeguards. “E-commerce platforms must implement stringent age verification processes and ensure that their algorithms do not inadvertently expose young shoppers to harmful content,” she asserts.

Vaishnavi also advocates for increased parental involvement and education to help guide young consumers, and highlights the importance of advanced tech and vigilant monitoring. “A collaborative effort between parents, e-commerce companies, and regulatory bodies is essential to create a secure shopping environment for our youth,” she says.

From screens to statutes: Dylan Moses discusses his views on the evolution of content moderation

3. Dylan Moses

Dylan Moses is a veteran Trust & Safety professional with hands-on experience at major tech companies, a Founding Fellow at the Integrity Institute, and a leading voice in distinguishing content moderation from censorship. He is currently finishing his JD at Harvard Law, where he also serves as a fellow at the Berkman Klein Center for Internet & Society, and his work revolves around the evolution of content moderation and its critical role in maintaining safe online spaces.

“Content moderation is about maintaining a safe and respectful online environment, not about stifling free expression.”

Why do we follow Dylan?

Dylan is keen to dispel the common misconception that content moderation and censorship are synonymous, clarifying that content moderation is about creating safe and respectful online environments by enforcing community guidelines, whereas censorship involves suppressing speech and controlling information.

“Content moderation aims to maintain a balance where users can express themselves freely while ensuring that harmful or inappropriate content is removed,” he explains.

He emphasizes the necessity for transparent and fair moderation policies that respect users’ rights while protecting the community from harmful content, and advocates for a nuanced understanding of these terms to foster healthier discussions about online safety and freedom of expression.

Dylan is also interested in the development of moderation practices. “The evolution of content moderation has been marked by the increasing use of AI, which helps identify and remove harmful content more efficiently,” he notes.

However, he also stresses the need for human oversight to ensure contextual understanding and fairness, and the importance of adapting moderation strategies to keep pace with the changing landscape of online content and user behaviors.

Brand safety and suitability: how content moderation can save an advertiser's reputation

4. AJ Brown

AJ Brown is COO at the Brand Safety Institute, former Head of Brand Safety and Ad Quality at Twitter, and a specialist in helping brands navigate the digital landscape.

“In today’s digital world, brand safety is paramount. Effective content moderation is key to protecting a brand’s reputation.”

Why do we follow AJ?

According to AJ, content moderation is critical for ensuring brand safety and protecting an advertiser’s reputation in today’s digital landscape. “Brands risk significant reputational damage when their ads are placed next to inappropriate or harmful content,” he states.

AJ believes various techniques and technologies can be employed to achieve effective content moderation, including AI-driven tools and human review processes. “Combining advanced AI with human oversight allows for more accurate and context-sensitive moderation,” he explains.

He also stresses the importance of ongoing monitoring and adaptation to new challenges in the digital advertising space.

6 challenges of countering misinformation on social media

5. James Alexander

James Alexander, former Global Head of Illegal Content & Media Operations at Twitter, is a specialist in combating misinformation, with extensive experience in addressing disinformation campaigns on social media.

“It’s really important not to get swept up by the fascinating or the interesting, but not necessarily very likely. People don’t usually make really complicated or complex misinformation. It exists and, of course, it can be very damaging. We’ve seen audio be a big problem for that recently and we don’t have great ways to counter audio abuse. But where possible, people are going to take that real thing and adjust it or they’re going to take that real thing and just lie about it. They’re going to take the easy option because they can do that 50 times and get a lot more value out of those 50 posts than they do using the complex option one time, which you figure out and take down.”

Why do we follow James?

James advocates for a holistic approach to the fight against misinformation, emphasizing that it’s not just about removing harmful content, but empowering users through education, and fostering a culture of truth and accountability. He believes this can only be achieved through collaboration between social media platforms, governments, and civil society to develop comprehensive policies that promote digital literacy and responsible information sharing.

He argues that technological solutions have a role to play in detecting misinformation early, and underscores the value of human oversight. Ultimately, though, James believes that empowering users to discern between reliable and unreliable sources is essential for building long-term resilience against false narratives. “Our fight against misinformation requires both technological innovation and human judgment to ensure that misinformation doesn’t have a chance to take root,” he says.

The Future of Generative AI: implications, human rights and detection tools

6. Sam Gregory

Sam Gregory is Program Director at WITNESS, a global organization that helps people use technology to protect and defend human rights. Sam focuses on the intersection between technology, human rights, and trust & safety, particularly in mitigating the risks posed by AI-generated content and deepfakes.

“The future of AI-driven content moderation lies not just in detection but in empowering communities to understand and challenge misinformation.”

Why do we follow Sam?

Sam believes platforms have a duty to protect their users from AI-generated content and is interested in the implications generative AI might have for human rights. His work addresses the pipeline of shared responsibilities between AI developers, platforms, and users in detecting and mitigating the risks of the latest AI technologies.

Sam is also interested in the balance between technological advancements and ethical considerations, noting that generative AI presents risks that can’t be managed by technology alone. He advocates for a rights-based approach, focusing on empowering users and building resilience against misinformation through education and community collaboration. “AI must be designed and deployed in ways that respect and protect human rights – without this, we risk turning powerful tools into instruments of harm,” he states.

7. Dr. Adam Pletter

Dr. Adam Pletter is a licensed clinical psychologist who specializes in the treatment of children, adolescents, and young adults from his office in Bethesda, Maryland. He also founded iParent101, a program aimed at helping parents and children safely navigate the digital world.

“The key to internet safety for children isn’t restricting their access, but educating and empowering them to make smart choices online.”

Why do we follow Dr. Pletter?

Dr. Pletter is a passionate advocate of internet safety for children and has dedicated much of his career to helping families navigate the complex online landscape, offering practical advice on issues ranging from handling cyberbullying to creating safe online environments for kids.

He brings a crucial child-centric perspective to the Trust & Safety space and urges parents to foster trust so that kids feel comfortable discussing their online experiences.

In his work on cyberbullying, he discusses the emotional and psychological impact of online harassment on young users and highlights the responsibility of both parents and platforms to provide a supportive environment. He believes that awareness and resilience-building are essential for helping children to handle the challenges of digital interaction.

8. Dr. Yi Liu

Dr. Yi Liu is a marketing strategist at the Wisconsin School of Business who provides valuable insights into the marketing aspects of content moderation. His expertise helps brands understand how content moderation can support marketing strategies and improve brand safety.

“Content moderation isn’t just a compliance issue; it’s a strategic marketing imperative that enhances trust and brand loyalty.”

Why do we follow Dr. Liu?

Dr. Liu argues that effective moderation is integral to building brand trust and maintaining a positive user experience.

Dr. Liu also highlights the risks brands face if they ignore the importance of moderation, from user backlash to reputational damage. He encourages businesses to view content moderation as a core aspect of their marketing and user retention strategies, rather than a secondary concern, and urges companies to take a proactive approach, aligning moderation policies with brand values to foster a culture of safety and integrity.

9. Dr. Alexa Koenig

Dr. Alexa Koenig is Co-Faculty Director at the Human Rights Center, UC Berkeley School of Law, and co-founded the Human Rights Center Investigations Lab. Her focus is on the intersection of technology, law, and human rights. Among her many research areas, Alexa has explored and written about the impact of distressing content on the individuals who moderate it.

“Ensuring the wellbeing of content moderators is not just an ethical obligation but also essential for maintaining effective moderation practices.”

Why do we follow Alexa?

Alexa brings an essential human rights perspective to the content moderation conversation. She advocates for the safety and wellbeing of content moderators, emphasizing that they are the unsung heroes of the digital world, often exposed to distressing content without sufficient support.

Alexa highlights the importance of providing mental health resources and fostering a supportive environment for moderators to mitigate the negative psychological effects of their work. “The mental health of content moderators is often overlooked, yet they are the ones on the front lines protecting our digital spaces from harmful content,” she points out.

Alexa stresses that platforms must balance removing harmful content with respecting freedom of expression, and calls for a rights-based approach to moderation that upholds the dignity of all users while ensuring safety. She also wants to see increased transparency in moderation decisions and better training that equips moderators with the right skills for this challenging work.

Want to join our panel of experts? Get in touch! We also recommend following our VP of Trust & Safety, Alexandra Popken, and our Head of Trust & Safety, EMEA, Ailís Daly.


Stay ahead in Trust & Safety! Subscribe for expert insights, top moderation strategies, and the latest best practices.