
The Future of Generative AI: implications, human rights and detection tools

November 8, 2023 | UGC

Every day, millions of new AI-generated images, videos and audio files are created and shared online. While some may seem harmless – like fun TikTok filters or AI artist trends – other types of manipulated media, like deepfakes, can have devastating implications.

In a world where we’re constantly bombarded with breaking news stories as we open our social media apps and websites, this new wave of technology is also leading many users to question whether the content they are seeing is real, manipulated or entirely AI-generated.

In Part 1 of a three-part series, Sam Gregory, Executive Director at human rights organization WITNESS, speaks with WebPurify’s VP of Trust & Safety, Alexandra Popken, about the future of generative AI – from the work WITNESS is doing to support people on the front lines of truth to how this area of technology intersects with a range of human rights issues.


You’re doing some incredible work at the forefront of generative AI and the implications and potential of that technology. Tell us a little more about what you do at WITNESS.

Sam Gregory: WITNESS is a human rights organization. We work at the intersection of human rights, video and technology. Our core stakeholders are people on the front lines of truth, like civic journalists and human rights defenders, who are documenting everything from war crimes to state repression, to land rights issues, to critical public discussions that are happening in their communities worldwide.

WITNESS works globally. It’s a team of 50-plus people, working in Latin America, sub-Saharan Africa, the Middle East and North Africa, Southeast Asia, and then two focal countries: Brazil and the US. We have a global range of work that is deeply grounded in the experience of ordinary people, journalists and civil society activists who are trying to share accounts of what is happening, often via social media and messaging platforms.

One of the realizations we had about 15 years ago was that they were doing this within an increasingly broad infrastructure of how you create and share information, using social media platforms, search platforms and mobile telephony – all those structures that set the terms of how they did it. That made it really important to bring a strong, early voice into the development of emerging technologies coming from our experience of the realities of how people use these tools and share content in these critical situations.

That’s what brought us, five years ago, to the space of what people were calling deepfakes, and then synthetic media, which now overlaps with what we describe as generative AI. We started working in the area of generative AI five years ago, as the earliest civil society organization to say: How are we going to approach this from a global human rights-led perspective? How do we ‘Prepare, Don’t Panic’? We ground our work in very deep and direct consultation with journalists, activists, content creators and technologists around the world. I just came back from a meeting in Bogota that brought together constituents from Latin America, and we were in Nairobi two months ago.

Based on that, we then think: what are these frontline information actors identifying as risks here? How are they contextualizing it to their existing knowledge and experience? Because often they’re contextualizing it to existing issues, such as content moderation or worries about government censorship or suppression. Then, what are the solutions they prioritize, be it authenticity and provenance approaches, media literacy or detection? Over the last five years, we’ve worked very closely on a range of technical standards issues, platform policy issues and discussions about what to prioritize, to ensure the needs of frontline defenders of truth are reflected in those.

Is WITNESS developing tools for these journalists? Or are you just consulting?

Sam Gregory: Many of my colleagues work very closely with specific communities, training and supporting them – often helping them understand how to engage with something that’s changing, such as generative AI, or the ways in which live streaming or war crimes documentation is happening at a local level. They support them to be as ethical and effective as possible in creating trustworthy narratives and documentation.

Then, other colleagues are focused on sharing good practices between communities. For example, a community in Myanmar learns from a community in Africa about a safe way to archive their material if there’s an internet shutdown, which is common in many settings. Then, there’s the work we do in the team that I directly led and now oversee, which is focused on the systemic technology infrastructure.

What we try to do is make sure these strands of work inform each other, so we’re very grounded when we talk about AI harms. We’re not talking abstractly: we’re talking because we’ve actually seen someone trying to deal with the problem. Most recently, we also ground our perspective in running a rapid response mechanism to analyze claims that a piece of content is a deepfake.

We do sometimes build tools. This links to how we relate to the critical space of authenticity and provenance. From the White House to the EU AI Act to major companies, there’s a focus on helping people understand how you know where something came from, what role AI played in making it, and how to disclose this. A lot of the thinking in this space originally came out of the human rights movement.

About 12 or 13 years ago, a number of human rights groups started to build what are now known as authenticated capture tools for phones, which was the idea that you could add rich metadata to a video and hash it so you could prove that it hadn’t been tampered with. This came out of groups wanting to build tools to document war crimes in places like Syria.
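To make that idea concrete, here is a minimal sketch of the tamper-evidence concept behind authenticated capture: a media file is hashed at capture time and bound to its metadata, so any later change to the file can be detected. This is an illustration only, not ProofMode’s or any specific tool’s actual implementation; the file names and metadata fields are hypothetical.

```python
# Minimal sketch of tamper-evident capture: hash the media and bind it to
# metadata at recording time, then re-hash later to verify integrity.
# Illustrative only; not the implementation of any real capture tool.
import hashlib
import json
from datetime import datetime, timezone


def sha256_of_file(path: str) -> str:
    """Compute a SHA-256 digest of the file's bytes."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def capture_record(video_path: str, metadata: dict) -> dict:
    """Bind a video's hash to its capture metadata in a single record.

    A real system would also cryptographically sign this record on the
    device so the metadata itself is tamper-evident.
    """
    return {
        "media_sha256": sha256_of_file(video_path),
        "metadata": metadata,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }


def verify(video_path: str, record: dict) -> bool:
    """Re-hash the file and compare with the digest stored at capture time."""
    return sha256_of_file(video_path) == record["media_sha256"]


if __name__ == "__main__":
    # Hypothetical usage: hash a clip when it is recorded, verify it later.
    record = capture_record(
        "clip.mp4",
        {"device": "example-phone", "gps": [4.711, -74.072]},
    )
    print(json.dumps(record, indent=2))
    print("Unchanged since capture:", verify("clip.mp4", record))
```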

We were pioneers in that space, building a tool called Proof Mode with a group called The Guardian Project. Often, we use the building of tools in our own work as a way to demonstrate reference designs that enable us to talk to much bigger players and demonstrate that we did this because we understood these challenges.

Tools form a part of our work, but a greater share involves advocating and engaging with much bigger structures that are going to have influence. For example, we built our own tool – Proof Mode – which still exists and is used, but we’ve also been part of the technical standards work on the Coalition for Content Provenance and Authenticity, which is developing a metadata-based technical standard for how we could understand authenticity and provenance in media. So, we bring the experience from grassroots work and tool-building into the space of technical standards and the way platform policies are set.
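For readers unfamiliar with how a metadata-based provenance standard works in practice, the sketch below shows the general shape of the idea: a set of assertions about how a piece of media was made, bound to the media by a hash. This is a heavily simplified, hypothetical illustration, not the actual C2PA specification or data model.

```python
# Hypothetical sketch of a metadata-based provenance manifest: assertions
# about how the media was made, bound to the media bytes by a hash.
# This is NOT the C2PA specification, only an illustration of the concept.
import hashlib
import json


def build_manifest(media_bytes: bytes, assertions: list[dict]) -> dict:
    """Bundle provenance assertions with a hash of the media they describe."""
    return {
        "media_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "assertions": assertions,
        # A real standard would also sign the manifest so it cannot be
        # altered without detection.
    }


if __name__ == "__main__":
    image = b"...raw image bytes..."  # placeholder for real media content
    manifest = build_manifest(
        image,
        [
            {"action": "captured", "tool": "example-camera-app"},
            {"action": "edited", "tool": "example-generative-model", "ai_involved": True},
        ],
    )
    print(json.dumps(manifest, indent=2))
```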

At WebPurify, we recently conducted a US consumer survey, where 45% of respondents said they do not feel well-equipped to discern between human-generated and AI-generated content, and 14% do not feel well-equipped at all. Are you surprised by these figures?

Sam Gregory: I’m not surprised – I don’t feel well-equipped! A lot of the way we’ve been talking to people about AI-based content assumes they’re going to be able to spot these clues within it that will give it away. I think the people who are overly confident are the ones who know the current clues – for example, if the hands are distorted, the voice sounds robotic, or there’s distortion on the forehead – and I’m very aware that those clues are just the current Achilles’ heel of the algorithm.

I’m also super aware that in all the consultations we’ve run globally, people say they don’t feel well-equipped and don’t feel they have the tools – even journalists. And I don’t want to give people false hope by saying they’re going to visually or audibly spot AI-generated or edited content. So, that figure is very resonant. Without signals of provenance, better detection and good media literacy, I would not like to make a declaration that we’re going to be able to do this. We’re going to need a combination of these tools and approaches to address this, particularly as the quality of AI-generated content improves, along with the ease and accessibility of the tools that produce it.

Learn more about the work WITNESS is doing within the human rights and technology sectors at witness.org and gen-ai.witness.org.