The challenges platforms face in detecting and reporting CSAM, and the need for effective tools and policies
July 9, 2024 | UGC

In 2023, reports of child sexual abuse material (CSAM) found online and made to the National Center for Missing & Exploited Children’s (NCMEC) CyberTipline rose more than 12% compared with the previous year, surpassing 36.2 million. In the second part of our interview with digital safety advocate and child protection expert Caroline Humer, we ask how platforms can best manage monitoring for CSAM and where they can go for advice and support.
Screening large amounts of User-Generated Content (UGC) is a challenge for any platform. Fortunately, with regard to CSAM, there are effective processes that can be replicated and organizations that can be approached for advice and guidance on how to establish a robust system. “There’s enough industry players who have a process in place, so why reinvent the wheel?” asks Caroline, who has worked for NCMEC as well as its sister organization, the International Centre for Missing & Exploited Children, and who now runs her own consultancy advising companies on how to protect their users. “We have industry standards through NCMEC, the Tech Coalition, WeProtect, INHOPE and others that allow us to help newcomers to this field. Don’t create something that you think works and then ask, ‘Is this correct?’”
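As a rough illustration of what such an established process often looks like in practice, the sketch below shows a simplified screening step that hashes incoming media and checks it against a vetted list of known-CSAM hashes, routing any match to trained human reviewers rather than acting on it automatically. This is a minimal sketch under assumptions, not any organization’s actual pipeline: the `known_hashes` set and the review action are hypothetical placeholders, and real deployments pair exact hashes like this with perceptual-hash tools (for example PhotoDNA or PDQ) and hash lists supplied through NCMEC and its partners.

```python
# Illustrative sketch only: a minimal hash-matching screening step.
# Real deployments combine exact hashes like this with perceptual hashes
# (e.g. PhotoDNA, PDQ) and hash lists supplied through NCMEC and partners;
# the known_hashes set and the review queue here are hypothetical placeholders.

import hashlib
from dataclasses import dataclass


@dataclass
class ScreeningResult:
    upload_id: str
    matched: bool
    action: str          # "allow" or "queue_for_review"


def screen_upload(upload_id: str, media_bytes: bytes,
                  known_hashes: set[str]) -> ScreeningResult:
    """Compare an upload's hash against a vetted list of known-CSAM hashes.

    Matches are not adjudicated automatically: they are queued for trained
    human reviewers, who confirm the material and report it to NCMEC's
    CyberTipline rather than the platform deciding on its own.
    """
    digest = hashlib.sha256(media_bytes).hexdigest()
    if digest in known_hashes:
        return ScreeningResult(upload_id, matched=True, action="queue_for_review")
    return ScreeningResult(upload_id, matched=False, action="allow")


# Example: with an empty hash list, nothing is flagged.
result = screen_upload("upload-123", b"...media bytes...", known_hashes=set())
print(result.action)   # -> "allow"
```

The design choice worth noting is that matching only flags content for expert review; as Caroline notes later in this piece, platforms are not the experts in identifying CSAM, so the system’s job is to surface candidates quickly and hand them to people and organizations that are.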
Protecting your content moderators
For platforms using third-party content moderators or their own in-house team to screen troubling content like CSAM, Caroline stresses the importance of ensuring those staff are supported and protected.
“Platforms need a mental health policy for the individuals who are doing this work on a day-to-day basis,” she explains. “Put a healthcare policy in place so your moderators know what they can ask for – whether that’s a psychologist once a quarter or an extra mental health day every month. That will show moderators and staff you are taking their physical and mental health seriously.”
The rise in AI-generated CSAM
One of the key trends identified by NCMEC in its 2023 report was a rise in the use of generative AI to create child sexual exploitation content. “It’s a big problem,” agrees Caroline. “You’ve got 12-year-olds who know the technology better than you or I, and they’re creating fake nudes of their schoolmates and then sharing it. That’s considered CSAM in the US, so how do we make sure that kids who understand the technology better than we do know that what they are doing is illegal? We need to find ways to better understand AI, and put the structures in place that enable us to minimize the risks.”
- Read our summary of Thorn’s ‘Safety by Design’ whitepaper for more on the risks posed by AI-generated CSAM
- Read our VP of Trust & Safety’s explanation of AI-generated CSAM and what should be done to mitigate it
AI-generated imagery is also being used to exploit adults through the rise of deepfakes, but Caroline cautions against viewing legislation as a silver-bullet solution. “We don’t understand AI enough yet. Let’s first create the policies, frameworks, strategies, and implementation. If that isn’t sufficient, then let’s start regulating AI and dictating how you can or cannot use it. I’m cautious of too much legislation – legislation is black and white, and we live in a gray world.”
The challenge of introducing effective legislation is also exacerbated by the fact that technology is moving so fast that any laws will likely be out of date before they have been enacted. “We weren’t talking about AI-generated CSAM two years ago, but the EU’s Digital Services Act and the UK’s Online Safety Act were already starting to be written,” says Caroline. “They don’t mention AI-generated CSAM because it wasn’t part of the discussion. How do we ensure that legislation is flexible enough to be able to adapt to the changes that technology is bringing?”
There are several internet safety bills currently before US lawmakers and, as regulations change, platforms may be encouraged to report even more instances of potential CSAM. Despite the risk of overwhelming NCMEC with false positives, Caroline advises platforms to err on the side of caution. “For the purpose of putting the child at the forefront of everything, we need to say, ‘let’s report.’ Platforms aren’t experts in identifying CSAM, so let the experts do that.”
As for the future, Caroline says that a lot remains unknown. “We don’t know what’s coming. We don’t know what the next challenge will be, the next technology, or where we will be in five years’ time. Will we be able to navigate deepfakes? Will we find a balance? We don’t always find the right tone from the start, so let’s understand that we can make mistakes – we’re allowed to make mistakes – but we also then have to rectify those mistakes.
“The other thing I think we need to do, and this is a tip for everyone, is to try not to be perfect. Solutions are never 100%, so let’s find solutions, whether they’re technological or not, that work, even if they’re imperfect. Because if they work 90% of the time, we can then put in the resources and find a solution for the remaining 10%.”
Useful resources
- NCMEC – the largest and most influential child protection organization in the US.
- Tech Coalition – an alliance of global tech firms working together to combat child sexual exploitation and abuse online. Provides exclusive resources, events and peer-to-peer mentoring to members.
- WeProtect Global Alliance – brings together over 300 members from governments, the private sector, civil society and intergovernmental organizations to develop policies and solutions to protect children from sexual exploitation and abuse online.
- INHOPE – a global network of 54 hotlines in 50 countries, including all EU member states, Russia, South Africa, North and South America, Asia, Australia and New Zealand.
Find out more: the role of collaboration in combating child exploitation online