
Content moderation and human rights: Human Rights Center’s Alexa Koenig explores the connection

May 26, 2023 | UGC

In the age of information and digital connectivity, the internet has become an invaluable platform for global communication, idea sharing, and the exercise of freedom of expression. However, this boundless landscape of online content has also given rise to complex challenges, particularly in the realm of human rights.

As online platforms grapple with the task of safeguarding user safety and preventing the spread of harmful or illegal content, content moderation is playing an increasingly integral role in preserving human rights.

“I was a lawyer and a law professor for about five years, but then became very frustrated with the limits of law to make social change,” says Alexa Koenig, co-executive director at the Human Rights Center, based at the UC Berkeley School of Law.

“The first big portfolio that I was asked to engage on [when I joined the Human Rights Center as executive director] was thinking about how new and emerging technologies were transforming the practice of human rights,” she explains. “We were looking at things like how information posted to social media sites could be useful at international tribunals for proving what was happening in physical spaces. And also how increased accessibility to satellite imagery, drone technologies, the rise in machine learning and automated processes could help us understand what’s happening in the world, get accountability when human rights have been violated, and prevent human rights violations in the first place.”

Striking the right balance between preserving freedom of expression and ensuring user safety is not an easy task, but it is a crucial one in today’s interconnected world. Here, WebPurify’s VP of Trust & Safety Alexandra Popken spoke to Alexa to delve into the intricate web of human rights and content moderation, shedding light on the delicate balancing act faced by online platforms.

 

How should companies or people even begin to define human rights abuses in the context of online content?

Alexa Koenig: We really have two buckets of information that are circulating in online spaces that reflect human rights harms.

The first is where physical harms were perpetrated and videos or photographs got posted into online spaces. The second is where the digital space itself becomes the scene of an international crime or human rights wrong. And that would be things like the dissemination of hate speech or incitement to violence. One notable example would be how hate speech circulated with regards to the Rohingya in Myanmar and the way that potentially led to offline harms.

The core of the human rights framework is all about respecting individuals’ dignity. So how do we ensure that the digital spaces we’re all living, working and playing in are respectful, and recognize every person as having a package of rights?

That includes things like the right to non-discrimination. Do all people have access to the internet? How are they treated once they’re online? Are they being discriminated against on the basis, for example, of race, ethnicity, religion or gender?

Second, things like freedom of expression. Are people able to state and communicate what they feel desperately needs to be communicated? There are also issues of privacy. Whose privacy is respected on what basis? Do people have control over their own privacy? And access to information is another critical one.

One thing that puts the thumb on the scale, in favor of things being left up, is that we want people to have access to the information that they need to live their lives, or that will make their lives more fulfilling and enjoyable.

The challenge, of course, is that it can be difficult for companies tasked with operating these online spaces to know which of these human rights interests to prioritize.

So for example, whereas protecting someone’s privacy might suggest you want to take particular content down, having access to information may dictate in favor of leaving that up. And I think it’s really healthy and helpful for the communities that are most impacted to be part of the problem-solving when the companies and human rights practitioners talk about these things. Because these are not easy trade-offs to make, and the solutions are never going to satisfy everyone.

But I think together we can increasingly get to a place where the online systems we’re all part of become spaces that really fulfill and support human rights.

The human rights framework was initially set up to limit the power of governments to abuse the rights of individuals. Corporations were not seen as part of that framework. But as more and more of our daily lives are lived online, there’s a growing need to figure out how corporations fit into that big picture, and to ensure that if there are violations, there’s some form of remedy for them.

What responsibility do you think companies have with regard to content moderation in preventing human rights abuses online?

Alexa Koenig: I think just from a customer satisfaction and customer safety perspective, there’s a real need to make sure the content people are exposed to is not going to be overly harmful.

We’ve repeatedly found that what hurts or harms one person is very different from what might hurt or harm another. Again, that makes content moderation services very tricky and difficult. But thinking holistically about how we minimize the range of risks is critical.

Will content moderation have a bigger or smaller role to play in the future?

Alexa Koenig: A bigger role. We’re seeing a diversity of online platforms. We’re seeing them being used in more creative ways. And my hope is that there’s a growing sophistication in our ability to engage with content moderation.

Content moderation at its best is a partnership between machines and humans. We need humans to determine the framework around what should stay up and what should come down. But we also need tools that automate these processes at scale. And we need to ensure they’re being used in ways that uphold human rights: that are non-discriminatory, that allow for freedom of expression and access to information, but also take down the information that’s inciting violence or causing other forms of harm.
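To make that partnership concrete, here is a minimal, hypothetical sketch of the kind of triage loop described above: an automated classifier scores content at scale, near-certain violations are actioned automatically, and uncertain cases are routed to a human review queue. The function names, thresholds and labels are illustrative assumptions only, not WebPurify’s or any platform’s actual system.

```python
# Hypothetical human-machine moderation triage (illustrative only).
from dataclasses import dataclass

@dataclass
class ModerationResult:
    content_id: str
    decision: str   # "remove", "keep", or "human_review"
    reason: str

# Policy thresholds are set by humans; the model only produces a risk score.
REMOVE_THRESHOLD = 0.95   # near-certain policy violations, removed automatically
REVIEW_THRESHOLD = 0.60   # uncertain cases go to human reviewers

def triage(content_id: str, text: str, score_content) -> ModerationResult:
    """Route one piece of content using an automated risk score.

    `score_content` stands in for any classifier that returns a probability
    (0..1) that the text violates policy, e.g. incitement to violence.
    """
    risk = score_content(text)
    if risk >= REMOVE_THRESHOLD:
        return ModerationResult(content_id, "remove", f"auto: risk={risk:.2f}")
    if risk >= REVIEW_THRESHOLD:
        # Machines narrow the firehose down to a human-scale review queue.
        return ModerationResult(content_id, "human_review", f"queued: risk={risk:.2f}")
    return ModerationResult(content_id, "keep", f"auto: risk={risk:.2f}")
```

The key design choice in this sketch is that humans own the framework (the thresholds and the policies behind them) while the machine only does the scoring at scale.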

We also need to bring more of a cultural perspective to some of the content moderation policies. That’s hard and it can be expensive, but it means thinking about things like the coded language that communities use, and who, in terms of age, gender and geography, is using the tools in what ways.

Hopefully, that will help us become ever more fine-grained in thinking about what we leave up and what we take down, and also provide more options than that binary. When can we give the user control over what they’re exposed to and what they access? More control over minimizing some of the content they find upsetting, while still allowing them to get the information they need?

How can companies ensure they’re building their products responsibly and designing policies that are consistent with human rights laws and norms?

Alexa Koenig: I’m a huge fan of creating incredibly diverse teams. What a woman from one part of the world finds potentially harmful may differ tremendously from what a man in a very different part of the world does. So thinking about how you bring as many perspectives as possible into the design of your content moderation policies is really going to give companies a valuable set of insights to work with.

With social media companies, one way I’ve seen this be really effective is not just in hiring diverse employees, which is critically important, but also turning to the public for feedback on policies before they’re launched. Really consulting those diverse populations of people.

Alexa Koenig: And then I’d also say, adding to the science. So you’ve got the general public, who have this breadth of lived experience they can contribute as insights, and then there’s staying on top of the scholarly research. And of course, companies doing their own research to better understand different phenomena and how to offset the harms.

How do you strike the balance between moderating content and enabling free expression?

Alexa Koenig: I think the constant delicate balance is figuring out who gets to inform those decisions. So is it someone on the ground in Syria who is being bombarded? Or someone on the ground in Ukraine? Is this information potentially beneficial to leave up from a human rights perspective, versus content that’s being used to harm because it’s a form of propaganda, or being used to manipulate international narratives? That’s not easy.

So challenging. How can civil society organizations and governments hold technology companies accountable for their content moderation practices? We’re seeing an increase in regulation; not a ton in the US, more in the EU.

Alexa Koenig: I think it has to be an ongoing dialogue between civil society organizations, who are watching out for the people they have a mandate and mission to protect, and companies, so that they’re aware and it’s on their radar.

With governments, it’s about really thinking through potential levers from a regulatory perspective. It can be tricky to know which analogy to use when deciding on the right framework for working with companies.

I feel that the tech industry acknowledges more regulation is needed, but when you watch some of Congress’s questioning as they grill tech companies, it’s like, wait a minute, they don’t even understand what they’re talking about. And there’s really a deep-seated frustration there.

Alexa Koenig: You can’t have an informed conversation and build for the future if you’re using the same words, but meaning very different things. So that means finding common ground and having the humility to say, ‘I don’t understand X, can you walk me through this?’ Too often we’re all trying to prove that we know what we’re talking about without taking a moment to say, ‘I’m curious about that. Can you explain more?’

What do you think the future of content moderation and human rights will look like? And what steps should be taken to ensure that it’s a positive one?

Alexa Koenig: Better human-computer interaction is a big piece of it. I think right now, given the scale of information circulating in online spaces, humans have done the best they can to stay on top of manual moderation processes. But humans on their own can’t even begin to touch the full scope of the challenge; what they can tackle is only a drop in the ocean. We need machines to identify potentially problematic content and bring things down to human scale. That way, humans can better understand what’s actually happening, and then refine the digital technologies to be more responsive to the nuances.

The future might even be content moderation per person. Instead of content moderation across an entire platform, maybe the company works in partnership with the individual to set the parameters around exposure, in ways that are much more interactive and engaged.
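As an illustration of what “content moderation per person” could look like in practice, here is a small hypothetical sketch: platform-wide rules still remove illegal content outright, while each user sets their own exposure preferences for legal-but-sensitive categories. The category names, defaults and actions are assumptions made for the example, not a description of any real product.

```python
# Hypothetical per-user moderation preferences (illustrative only).

PLATFORM_BANNED = {"incitement_to_violence", "csam"}  # always removed, non-negotiable

DEFAULT_PREFERENCES = {
    "graphic_violence": "blur",   # "show", "blur", or "hide"
    "sensitive_news": "show",
}

def presentation_for(labels: set[str], user_prefs: dict[str, str]) -> str:
    """Decide how to present one item to one user, given its content labels."""
    if labels & PLATFORM_BANNED:
        return "removed"                      # platform policy overrides user choice
    prefs = {**DEFAULT_PREFERENCES, **user_prefs}
    actions = [prefs.get(label, "show") for label in labels]
    if "hide" in actions:
        return "hidden"                       # user chose not to see this category
    if "blur" in actions:
        return "blurred_with_click_through"   # user keeps access to the information
    return "shown"

# Example: a user who wants graphic violence hidden but news kept visible.
print(presentation_for({"graphic_violence", "sensitive_news"},
                       {"graphic_violence": "hide"}))  # -> "hidden"
```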

It’s also about bringing the decisions closer to home. So, as we already talked about, it means making sure that more and more of the communities that are impacted really do have a way to engage directly with companies about their experiences.

For example, I’m thinking about work that we did in a country that was about to have elections – not the United States – where it was so helpful to talk with people on the ground about how social media was being used by different constituencies in ways that would have been really invisible to someone like me from Berkeley, California. The terminology, the subtle nuances, the context would have gone right over my head.

 

Will new technologies like generative AI, VR and AR impact your work?

Alexa Koenig: Augmented reality and virtual reality are going to be huge in the human rights space. Right now, you’ve got investigators who go to the site where an atrocity happened in the middle of a war zone or wherever, and they’re taking photographs or video and submitting it as evidence. What we’re beginning to see now is immersing the decision maker, the fact finder, whether it’s a judge or a jury, at the scene of the crime or at the site of the human rights violation to help them better understand what actually happened.

Now that also raises huge new risks in a couple of ways. One is the traditional garbage-in, garbage-out problem. If those systems are being designed with limited data sets that might distort what actually took place, our confidence that we really understand what happened may go way up, whereas our accuracy may stay the same or go down in ways that are deeply problematic.

The other challenge is psychosocial. Think about what we’re already being exposed to with videos and photographs; if you then immerse someone in that experience, whether it’s an attack or a bombing run, because it’s so emotionally and sensorially immersive, there’s a tremendous risk of even greater harm.

Of course, we’re going to have all the negative human behaviors in digital spaces that we’ve always had in physical spaces. It was years ago that we saw the first sexual violations in virtual reality. As the technology gets better and better, it’s just going to become more and more realistic, and violations committed in those spaces should already be recognized as violations. We don’t necessarily need new legal frameworks to capture crimes that are happening in virtual reality, but we do need to think creatively about how existing frameworks apply.

As practitioners, we often think about harms that happen in physical space that are documented in online spaces. But then we’ve also got the harms that happen in online spaces. Now we’ll have the harms that happen in virtual reality.

A lot of people are still at the point where if something happens in digital space it’s not quote-unquote ‘real’. I think that perspective is rapidly dissolving and should be considered to have dissolved already. We’ve got to get ahead of this and start planning for the future we already know is coming. We can take the advances we’ve seen over the last decade in video and online spaces and game them out to these new forms of interaction. It’s on us if we don’t.

We at WebPurify actually moderate the metaverse. We have moderators in these virtual worlds, moderating behavior. And there is that interesting spatial element of how someone gets up in your space, or virtually hugs you, in a way that makes you uncomfortable. And we just haven’t seen that with traditional forms of media. I think that’s going to be an increasing challenge as VR/AR expand.

Alexa Koenig: Well, I like the idea of putting a safety bubble around the moderator when they go into these virtual spaces, so that they can at least feel that their body has autonomy and that they’re safe, and of thinking about what security functions are deployed for the people who are using those spaces. I think all of it’s fascinating. It shows both the strengths and weaknesses of being human today, and it really will take the whole community to figure out some of the solutions.

Lastly, I’m curious to get your thoughts on these large language models and generative AI. How should the companies that are both creating these models and integrating them into their platforms be thinking about moderation and preventing human rights abuses? Is that something the Human Rights Center is focused on currently?

Alexa Koenig: It’s something that we know we need to be focused on for the future. Similar to deep fakes or other forms of synthetic video, I think helping the world understand when something is generated by an algorithm versus when something is generated by a human still matters.

So are there ways for companies, as they make advancements in these technologies, to signpost when something is a machine versus a human? I think we need to think really carefully about how we do that, so we don’t just collapse these worlds into each other in ways that have harms we can’t see coming.
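One hypothetical way to picture that kind of signposting: attach provenance metadata when content is generated, and surface a disclosure label when it is displayed. The sketch below is illustrative only; the field names are assumptions and it does not implement any particular provenance standard.

```python
# Hypothetical provenance signposting for AI-generated content (illustrative only).
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ContentItem:
    body: str
    ai_generated: bool
    generator: str | None = None          # e.g. a model name, if machine-made
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def display_label(item: ContentItem) -> str:
    """Return the disclosure a platform might show alongside the content."""
    if item.ai_generated:
        return f"Generated by {item.generator or 'an AI system'} on {item.created_at}"
    return "Posted by a human user"

post = ContentItem(body="Example synthetic text.",
                   ai_generated=True, generator="example-llm")
print(display_label(post))
```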

While content moderators are helping safeguard the rights of internet users, who’s looking out for their rights? In part two of our interview with Alexa, we examine why content moderators have human rights, too, and how to protect them as well.

Read Part 2 of Alexandra’s interview with Alexa.
