Naked truths: what adult sites can teach big tech about online safety
March 21, 2025 | UGC
“There is a lack of appetite for the non-adult hosting providers to engage with the adult platforms. It’s almost seen as distasteful,” reflects Emily Harman. She’s seen firsthand how the adult content sector remains shrouded in stigma – often excluded from discussion forums and coalition groups, and passed over in commentary.
But as platforms grapple with mounting regulatory pressures, should we be turning to the adult industry – long forced to confront these challenges head-on – for the key to more effective online safety measures?
“One of the biggest misconceptions is that the worst content – such as CSAM (Child Sexual Abuse Material) or non-consensual intimate images – lives predominantly on adult platforms,” Emily points out. “But research from NGOs and child protection organizations shows that a huge amount of this content is actually found on platforms that don’t allow adult content. That’s because those platforms often underestimate the risks. By contrast, adult platforms often acknowledge the inherent risks in their business models and build safety infrastructure to protect users accordingly.”
Before becoming a trusted advisor for tech platforms and civil society organizations, Emily served as the Trust & Safety Lead at OnlyFans, where she played a pivotal role in developing safety measures. Previously she worked as a criminal attorney specializing in serious sex offences, where she gained a firsthand insight into how offline harms migrate into the digital world.
To explore what measures the adult platform industry has taken to keep its users safe, and what mainstream platforms can learn from it, WebPurify’s Head of Trust and Safety for EMEA, Ailís Daly, connected with Emily…
What drew you to a career in online Trust & Safety?
It felt like a really natural progression for me career-wise, given my background in serious sex offences. So much of people’s experience of crime now happens online, because there is such a blurring of individuals’ online and offline worlds. Especially with children, who were the victims of a lot of the offences I worked on – their lives are increasingly lived on social platforms. I felt like I could make a difference if I worked to make those online experiences safer – rather than dealing with the fallout after an offence has been committed and the trauma has already been caused.
One of the things I’m really passionate about is preventing online sexual abuse – not just because of the lifelong trauma it causes to the survivors and their families but also because of the effects on the perpetrators. There is so much tech can do, alongside charities, NGOs and law enforcement, to stem some of the tide of online sexual offences.
Do you think adult content platforms are misunderstood?
There’s this misconception that adult platforms don’t care about trust and safety, but that couldn’t be further from the truth. They actually care about safety. No reputable company wants illegal content on their site – it’s bad for users, bad for business, and bad for reputation.
It frustrates me when people assume these companies are careless or reckless. The reality is, they often take safety more seriously than mainstream platforms because they have to. They want a safe space where people can explore their sexuality without exploitation or harm. And that’s a goal that benefits everyone, regardless of your views on adult content.
What do you feel are the biggest compliance challenges adult platforms face when moderating content?
I think adult platforms have a high-risk business model. That said, I think it’s also quite a simple moderation task because the risks are obvious: you host nudity and there could be non-consensual intimate images. There’s also obviously a risk of CSAM with any platform that allows user-generated content (UGC). You also have controlling and coercive relationship risks where a potential partner is being coerced into being an adult performer. Then you have sextortion and financial crimes.
Are there any easier aspects of trust and safety for adult content platforms?
On the one hand, it’s simpler for adult platforms, in that you have one user base: it’s going to be an 18-plus age gate. The complication comes when you have to walk the tightrope between business viability and commercial sensitivities on one side, and regulatory compliance measures on the other – measures which are obviously becoming more robust as regulators such as eSafety and Ofcom start enforcing online safety legislation.
I still don’t think it’s as hard as a lot of platforms may say it is, but it’s certainly a friction that exists between commercial sensitivities and complying with the law.
How have evolving regulations such as the Online Safety Act or the Digital Services Act impacted content moderation, specifically on adult platforms?
They’ve been helpful in creating a roadmap for platforms in terms of what the expectations are, but also how specifically to implement those measures so that they comply with content moderation standards, such as the Ofcom approach to the illegal harms codes. It’s very clear which measures are recommended, and which risks and safety verticals those measures should address.
They can also help platforms to overcome the friction that sometimes exists between safety and user privacy. Platforms can say measures are being implemented because of regulatory obligations within these regions, for example. That helps businesses explain the reasons for moderation measures to users, but also assists in providing a legal ground for data usage under privacy regulations, such as GDPR.
When you think of third-party moderation providers, how could they play a role in helping platforms meet compliance requirements? Do you see that as a helpful position to be in?
Absolutely. Not all platforms will have the technical wherewithal to build all their moderation systems in-house, so it’s really helpful for there to be a wide variety of third-party vendors for them to speak to. That means they can lean on that expertise, and the volume of solution providers also keeps prices competitive and affordable for startups.
With global regulations in flux – some tightening while others face pushback – what new compliance challenges do you see emerging specifically for adult content platforms?
The most recent argument around regulation is age gating. There are two arguments against that particular compliance measure. The first is business viability: if a platform introduces too much friction, will its entire user base just disappear? The second is the ‘whack-a-mole’ effect: if a safety-conscious adult platform introduces too much friction and requires, for example, too much user data or identification, will users depart for an unregulated space where there is no moderation?
What do you say to those challenging age gating on platforms?
On whether it’ll drive people to unregulated spaces, that’s an argument I agree with. I think there is a risk that users will stop using platforms like Pornhub and perhaps visit platforms in unregulated spaces or platforms that haven’t complied with the regulations. Potentially off the surface web, into the unregulated spaces of the dark web.
What I also say is that we’ve got to garden on our own turf, move the rocks on the beach in front of us. Every platform has just got to start at home and make their platform as safe as they can. Yes, we do need to deal with the unregulated spaces and we need to work with law enforcement and third-party NGOs to try and ensure that there’s safety everywhere. But if you’re a platform which hosts content which children should not see, you need to start at home and make sure your platform is safe before worrying about the risks of harm elsewhere.
It’s something we talk about all the time: the disparate impact of well-meaning policies or well-meaning regulation. If it’s overdone, it can have pretty drastic consequences. It sounds like the advice, even for platforms that host adult content, is to get ahead of regulation – but not so far ahead that they end up folding their business, or driving users towards more underground, less-safe platforms.
Yes, there will be scenarios where adult platforms and other social media companies will have to leave money on the table in favor of promoting safety measures and processes. I think a lot of platforms can absorb those costs.
What can platforms learn from adult content sites about age gating?
Platforms that are concerned should listen to the stories from more mature platforms about what happened to their user numbers when they introduced more friction.
OnlyFans, for example, spoke very publicly about the fact that when they introduced more age gating for their subscriber community, they noticed a temporary downturn, but numbers then recovered – a brief trough before rising again. Hopefully platforms will have the confidence to hold their nerve and stay the course.
Stories from other platforms – failures and successes – can be used to justify that investment in trust and safety and in safety-by-design measures. Notwithstanding an initial dip, the overall outcome was net positive.
Ofcom has actually been very generous in the list of age assurance and age verification measures that it considers likely to be effective. The OnlyFans model, for example, of nine to 10 pieces of identifying data for creators, is not going to work for all platforms.
I can understand why a lot of platforms would say no, that’s too much friction. No one will want to give that amount of data. But that’s at one end of the spectrum in terms of the robustness of identity verification, and obviously there’s a reason for that given the nature of the platform.
But you also have email and phone number identification processes through the mobile network operators, so there are lots of measures that companies can adopt through a third-party age gating vendor that present less of a hurdle for users, and feel less like an infringement of their privacy.
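As a rough illustration of what a lower-friction age check might look like, here is a minimal sketch in Python. The vendor endpoint, request fields and response shape are all hypothetical assumptions for illustration, not any real provider’s API:

```python
# Minimal sketch of a low-friction age check via a hypothetical third-party
# age-assurance vendor (e.g. one using email / mobile network operator signals).
# The endpoint, fields and response shape are illustrative assumptions.
import requests

AGE_CHECK_URL = "https://api.age-vendor.example.com/v1/checks"  # hypothetical


def is_over_18(email: str, phone_number: str) -> bool:
    """Ask the vendor to confirm the user is 18+ without collecting ID documents."""
    response = requests.post(
        AGE_CHECK_URL,
        json={"email": email, "phone_number": phone_number, "minimum_age": 18},
        timeout=10,
    )
    response.raise_for_status()
    result = response.json()
    # Fail closed: only grant access when the vendor positively confirms 18+.
    return result.get("age_check_passed", False)
```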
I want to touch on the nine points of verification used on OnlyFans. How do they balance regulatory obligations with user privacy and freedom of expression?
First off, I think it’s really important to clarify what kind of user privacy we’re talking about. There’s the legal right to privacy, and then there’s the more subjective sense of privacy – what people feel they’re entitled to. One of the things my colleagues and I talked about a lot when I was at OnlyFans is that there’s no basic human right to have an OnlyFans account. If a company asks someone to go through a few KYC (Know Your Customer) checks before signing up, that’s not infringing on their rights.
And when it comes to legal privacy, if a platform is required by law to put in an age gate, that immediately gives them a legal defense against any privacy complaints. You can’t claim your privacy is being violated just because a company is following the law.
That’s such a key point. People talk a lot about the balance between privacy and safety, but do you think that framing creates a false opposition?
Yes, I do. I don’t think privacy and safety are at odds at all – they’re completely intertwined. Protecting privacy actually helps protect safety, and vice versa. If you take children as an example, you want them to have private experiences online so they can explore and learn safely. But at the same time, you need safety measures in place so they’re not at risk of being targeted by bad actors.
And when people say, “Oh, we need to protect privacy,” we have to ask – at what cost? Are we saying privacy should trump the right of a child not to be abused online? Or that someone’s right to private communication should override a woman’s right to not have her intimate images shared without consent? That’s where the real balancing act comes in.
That’s a really powerful way to put it. So let’s talk about moderation. You’ve worked with both adult content and mainstream platforms – what do you see as the big differences in how those two types of content get moderated?
Honestly? I think they’re more similar than people realize.
If a platform allows UGC, there’s always a risk of something harmful slipping through. And that’s true whether you’re talking about adult material or something totally different like gaming content.
One of the biggest misconceptions is that the worst content – such as CSAM or non-consensual intimate images – lives on adult platforms.
Wait, really? That’s so counterintuitive. Why do you think that happens?
The platforms that permit adult content tend to be way more prepared for what to do when the worst content appears. Mainstream platforms, on the other hand, are often caught off guard because they assume it won’t happen to them. That’s a dangerous mindset.
It sounds like mainstream platforms could actually learn a thing or two from how adult content platforms handle moderation. What are some of the biggest lessons they could take away?
Oh, 100%. First of all, they need to stop thinking this is just an “adult platform” problem. If your platform allows UGC in any form – whether that’s profile pictures, live streams, or private messages – you need to have strong moderation policies in place. This is why I don’t like describing adult platforms as platforms which host adult content – rather they’re platforms which ALLOW adult content. The reason for that – and it might sound like pedantry – is because all social media platforms host adult content: their policies may not permit it, but it is there.
One thing adult platforms do really well is rigorous identity verification. They don’t just let anyone upload content – they make sure people are who they say they are. Another big thing is proactive content moderation. Instead of waiting for reports to roll in, they use hash-matching technology to prevent known illegal content from ever being uploaded. Mainstream platforms should be doing the same thing.
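As a rough illustration of that kind of proactive, upload-time screening, here is a minimal sketch in Python. The SHA-256 fingerprint is a simplified stand-in for the perceptual hashes (such as PhotoDNA or PDQ) that vetted hash-sharing programmes actually supply, and the function names are illustrative:

```python
# Minimal sketch of upload-time hash matching against a list of known-bad hashes.
# Real deployments use perceptual hashes supplied through vetted programmes
# (e.g. NCMEC or StopNCII lists); SHA-256 here is a simplified stand-in.
import hashlib


def fingerprint(file_bytes: bytes) -> str:
    """Fingerprint the upload. A perceptual hash would also catch resized or re-encoded copies."""
    return hashlib.sha256(file_bytes).hexdigest()


def screen_upload(file_bytes: bytes, known_bad_hashes: set[str]) -> str:
    """Block the upload before publication if it matches a known-bad hash."""
    if fingerprint(file_bytes) in known_bad_hashes:
        # A match means known illegal content: block it and escalate
        # in line with the platform's legal reporting obligations.
        return "block"
    return "allow"
```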
That brings me to moderation tools. What’s actually working when it comes to preventing the spread of non-consensual content?
The most effective approach is a layered one. There’s no one magic tool that solves everything, so platforms need multiple safeguards.
One of the best tools out there is hash-matching technology, like what’s used for CSAM. For example, StopNCII.org, which is run by the Revenge Porn Helpline, lets victims create digital fingerprints (hashes) of their images. Platforms that integrate with StopNCII can block those images from ever being uploaded. That’s huge because once an intimate image is out there, the harm is done. And then, going back to the layered approach, you also want to leverage the user community by making reporting easy. Plus you want to partner with law enforcement to ensure you stay informed about emerging trends. And you want to ensure content moderation is human-led and AI-enhanced, not the other way around.
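Stacked together, those layers might look roughly like the sketch below – a hedged outline only, with placeholder stubs where a real platform would plug in its vetted hash lists, machine-learning models and human review tooling:

```python
# Illustrative sketch of a layered moderation pipeline: hash matching first,
# then AI-assisted scoring, with the final call left to human reviewers
# ("human-led, AI-enhanced"). All stubs and thresholds are placeholders.
import hashlib


def _hash(file_bytes: bytes) -> str:
    # Simplified stand-in for a perceptual hash (e.g. PhotoDNA or PDQ).
    return hashlib.sha256(file_bytes).hexdigest()


def _risk_score(file_bytes: bytes) -> float:
    # Placeholder for an ML model returning a violation likelihood between 0 and 1.
    return 0.0


def _queue_for_human_review(file_bytes: bytes) -> str:
    # Placeholder: in practice this opens a case for a trained moderator.
    return "pending_human_review"


def moderate_upload(file_bytes: bytes, known_bad_hashes: set[str]) -> str:
    # Layer 1: block known CSAM/NCII (e.g. StopNCII hashes) before publication.
    if _hash(file_bytes) in known_bad_hashes:
        return "blocked"
    # Layer 2: AI-assisted scoring surfaces likely policy violations...
    # Layer 3: ...but a human moderator makes the final decision.
    if _risk_score(file_bytes) >= 0.7:
        return _queue_for_human_review(file_bytes)
    # Layer 4: easy user reporting remains the safety net after publication.
    return "published"
```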
It sounds like the StopNCII tool should be everywhere, but I don’t hear a lot about it. Why do you think more platforms haven’t adopted it?
I think a lot of platforms underestimate the risk of NCII on their platform, and think ‘oh, we have other moderation in place’ or ‘it’s not considered a high risk on our platform because we don’t allow images in chats’. But I believe that any platform with an upload button should consider itself at risk for both NCII and CSAM, and hash-matching tech is such an easy piece of safety tech to onboard that it’s a no-brainer to me.
In terms of why the tool isn’t widely known, a lot of it is just lack of awareness. Victims don’t always know these tools exist.
I want to switch gears a bit. How can adult platforms build trust with users and actually encourage them to report harmful content?
First, they need to make their terms of service usable. Too often, policies are written for lawyers and regulators, not for actual users. People need to clearly understand what’s allowed and what isn’t.
Second, reporting buttons should be super easy to find. It sounds basic, but if someone doesn’t know how to report something – or worse, they don’t feel confident that reporting will lead to action – they just won’t bother.
On a related note, how do platforms handle those tricky cases where consent is unclear? Like, when there’s a report of non-consensual content but it’s not obvious?
My general rule? If consent is even slightly ambiguous, take the content down first and investigate later. It’s much safer to remove something temporarily than to risk leaving up non-consensual content.
That said, platforms do have to watch out for malicious reports. Sometimes, someone might submit a false report – maybe a disgruntled ex or a fan trying to get revenge on a creator – so there has to be a solid process in place to verify claims. But the first step should always be to take down the content while that happens.
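To make that concrete, here is a minimal sketch of a ‘remove first, investigate later’ flow. The record shapes, status values and function names are hypothetical, used only to illustrate the order of operations:

```python
# Illustrative sketch of a takedown-first workflow for consent reports:
# hide the content immediately, then investigate (including checking for
# malicious or false reports) before reinstating or permanently removing it.
from dataclasses import dataclass


@dataclass
class Report:
    content_id: str
    reporter_id: str
    reason: str  # e.g. "non-consensual intimate image"


def handle_consent_report(report: Report, content_store: dict[str, dict]) -> str:
    content = content_store[report.content_id]

    # Step 1: if consent is even slightly ambiguous, hide the content straight away.
    content["visible"] = False

    # Step 2: open an investigation - gather consent records, contact the
    # uploader, and check for signs of a malicious report.
    content["status"] = "under_investigation"

    # Step 3 (not shown): reinstate if consent is verified, or permanently
    # remove and escalate if it is not.
    return content["status"]
```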