
AI-Generated Content: should platforms do more to protect users?

November 22, 2023 | UGC

As AI-generated content continues to take the online world by storm, many users are calling for platforms to put greater measures in place to detect and remove harmful manipulated media before it reaches them.

From labeling content with watermarks to creating high-tech detection tools, many platforms, such as TikTok and Meta, are already taking steps to tackle synthetic and manipulated media. However, with privacy issues and technological limitations standing in the way, the question of identifying AI-generated content may not have a straightforward answer.

In part 3 of their conversation, Sam Gregory, Executive Director at human rights organization WITNESS, speaks to WebPurify’s VP of Trust & Safety, Alexandra Popken, about what’s currently being trialed to manage AI-generated content, before sharing his advice on what platforms, AI developers and users can do to mitigate the potentially harmful consequences of this technology.


Large platforms, particularly social media companies, are grappling with how generative AI will bring more deepfakes and synthetic or manipulated media onto their services. What would be your advice to the policy and enforcement teams on handling this new wave of AI?

Sam Gregory: The first thing is not to de-historicize or decontextualize this. The most adverse impacts of this will be felt by the people who already face disparate harms. Marginalized, vulnerable people face the most harm from new technologies just as they face it with existing technologies.

New technology doesn’t change the way that harm is experienced, nor does it change the fact that expertise, and decisions about how to prioritize responses, need to come from those communities.

The first thing to consider is: how do we ground the response in communities that have frontline exposure and have been disparately impacted? Secondly, as a general principle, human rights impact assessment is a really strong way to do this. It has a robust methodology that many teams already understand and that can productively shape the development and implementation of technologies and policies.

I think we’re at a real cusp moment where the voices of people in platform policy teams are critical to making the right decisions about, for example, how we do watermarking or labeling in a way that really protects privacy but also provides signals to the platform and its users. It’s a moment to get that right for their users, and much more broadly for society, as we move into voluntary commitments and eventually regulation.

Are there any concrete steps platforms need to consider?

Sam Gregory: There are concrete steps that platforms need to consider in terms of how they think about their policies. It’s really important to communicate to the public about what is possible and what’s not, and not to overly raise expectations of detection or clues. Platforms have a responsibility to really help their users and that includes supporting media literacy.

It’s also about helping users if they choose to label manipulated media themselves. For example, TikTok just launched a way you can do that within the platform. I know other platforms, such as Meta, are also experimenting with this and will probably launch something.

But it’s important not to lean into the idea that we’re going to be able to simply say yes or no for AI content. I think that’s a false promise to the user right now even if it would be nice to make that claim.

We need to design authenticity and provenance tools from a global perspective and from a human rights perspective. It’s going to be helpful to know how something is made, whether AI-generated or real-life media, but we don’t need to know who made it as a necessary part of that process. We need information that can help a user make discerning choices, but we don’t want to compromise fundamental civil and human rights such as privacy in doing that.

On detection, I think one of the frustrations over the past years has been that detection gets much less resourcing than synthesis and creation, and it always will. We need investment on the detection side, and that investment also needs to not just be centered on the platforms themselves but on making detection available more broadly.

It’s a real role for platforms and tech companies to support civil society and journalists with this access because they are on the front lines. As we’ve already seen, a lot of this work is going to fall to fact-checkers and others who must grapple with the realities they face and the claims and stories they have to debunk or verify. So, it’s not just internal to the policy teams or platforms: it’s about getting great detection tools that work well and equitably, but also making sure you are supporting and resourcing the people in the world who are going to bear the brunt of this.

Going back to the origin of this content, with the AI developers, what should they be doing to build these tools responsibly?

Sam Gregory: There’s a whole suite of ways in which we need accountability and responsibility from the developers here. For example: how are they transparent? How are they red-teamed in a way that’s accountable right from the start? How are they mitigating harms and risks that are identified and that are already well-known from the beginning? How are they doing human rights impact assessments before, during and after? So, there’s a whole suite of activities that are increasingly being articulated in soft norms, for example in the White House voluntary commitments.

I think there are some good, solid starting points in the Partnership on AI’s framework for responsible practices on synthetic media, which WITNESS is part of, alongside Google, Meta, Microsoft, TikTok, Bumble, the BBC, Truepic, Synthesia and others. This set of principles focuses on transparency and disclosure, particularly for synthetic media, as well as identifying core questions around consent.

When we look at particular techniques, we can get even more granular. For example, if we’re looking at really establishing the authenticity and provenance of large language model content, we might need to go back to radioactive data within the training data. We might need to go all the way back to the foundation model to have a really robust way of knowing, much further back in the pipeline than the app someone uses on their phone to create an image.

Accountability for understanding AI origins goes further back in the chain than we’ve been used to going. We don’t really talk about camera manufacturers or film manufacturers as part of how we understand responsibility in image generation, but with AI we have to go further back in the pipeline. Developers need to be part of the process of providing disclosure, as well as supporting a robust framework that’s focused on transparency, on disparate harms, and on human rights impact assessments. That’s what those AI developers need to be doing now.
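As a purely illustrative aside, the provenance chain Sam describes could be sketched as a simple data structure: each step records how a piece of media was made (training data, foundation model, editing app) but deliberately not who made it, in line with the privacy principle discussed above. The sketch below is a minimal, hypothetical example in Python; it is not any platform’s implementation or an existing standard such as C2PA, and all names and fields are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class ProvenanceStep:
    """One link in the chain describing *how* a piece of media was made.

    Records process information only -- no creator identity -- reflecting
    the idea that provenance signals should not compromise privacy.
    """
    stage: str                        # e.g. "training-data", "foundation-model", "app"
    tool: str                         # the dataset, model, or editing app involved
    operation: str                    # what happened at this stage
    disclosure: Optional[str] = None  # e.g. a watermark or label attached here


@dataclass
class ProvenanceManifest:
    """An ordered chain from training data all the way to the published asset."""
    asset_id: str
    chain: List[ProvenanceStep] = field(default_factory=list)

    def add_step(self, step: ProvenanceStep) -> None:
        self.chain.append(step)

    def summary(self) -> str:
        """A human-readable 'how it was made' trail a platform label could surface."""
        return " -> ".join(f"{s.stage}: {s.operation}" for s in self.chain)


# Hypothetical example: tracing an AI-generated image back through the pipeline.
manifest = ProvenanceManifest(asset_id="example-asset-001")
manifest.add_step(ProvenanceStep("training-data", "example-dataset-v1",
                                 "marked with traceable ('radioactive') samples"))
manifest.add_step(ProvenanceStep("foundation-model", "example-image-model-v2",
                                 "generated base image", disclosure="invisible watermark"))
manifest.add_step(ProvenanceStep("app", "example-phone-app 3.4",
                                 "cropped and filtered", disclosure="'AI-generated' label"))
print(manifest.summary())
```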

Finally, are there any companies people should keep an eye on that are ahead of the curve in tackling this? I know you mentioned TikTok, from a platform perspective.

Sam Gregory: TikTok has been interesting. I think it’s partly because they already work in a format where AI-driven media creation is native. There’s a very visible way in which a piece of media is created and labeled: it was made with this filter, this effect was stitched in this way, this audio was used.

TikTok has also been fairly proactive in trying to lay out clearer policies around what they allow and providing ways for people to label.

The companies that have joined the Partnership on AI’s Framework for Responsible Practices represent the cross-industry, cross-stakeholder thinking you need here: from Bumble, which is obviously focused on interpersonal relationships, to Synthesia, which provides business software for a new way we might communicate, and then, of course, the tech giants who are trying to think from a creative and commercial point of view, such as Microsoft or Adobe.

Also, WITNESS is saying: we need to really think about consent and disclosure, but how is that going to play out in a global context where people are using this technology for both creativity and harm, with disparate impacts on the most vulnerable and most critical voices in our information ecosystems? The responsible practices are a good example of how you can find shared steps to take, even now before regulation happens, that pull people in the right direction and invite reflection on where regulation is needed: on really important areas such as transparency about harms, consent, and disclosure of AI processes.

Learn more about the work WITNESS is doing within the human rights and technology sectors at witness.org and gen-ai.witness.org.