
Watching out for the watchers: How content moderators can stay safe from harm too

June 2, 2023 | UGC

Content moderators are increasingly on the front line, safeguarding people’s human rights and insulating us from the worst of humanity: violence, abuse, and the incitement of hatred. But here, we reflect on the impact that work is having on the content moderators themselves: what about their rights and their mental health?

If content moderators spend every working day examining harmful content, it stands to reason they may be harmed themselves.

Alexa Koenig, co-executive director of the Human Rights Center at the UC Berkeley School of Law, has dedicated research to the subject – her upcoming book, Graphic: Trauma and Meaning in Our Online Lives, will be published by Cambridge University Press in June – and here she speaks to WebPurify’s VP of Trust & Safety, Alexandra Popken, about her findings.


You have a forthcoming book about how viewing disturbing content online can affect society. Can you tell us a little about that?

Alexa Koenig: My co-author Andrea Lampros and I realized that in our day-to-day work in human rights, we were engaging with more and more digital content. We set up what’s called the Investigations Lab here at UC Berkeley’s Human Rights Center to train students how to find information related to human rights violations online, and how to verify its accuracy. So if something claimed to be a video of a human rights violation in Myanmar, is this actually Myanmar? Was there actually a violation?

In the course of that work, our students were being exposed to really graphic content themselves. If you think about what users put up online, they’re often putting up the most graphic incidents they can to try and grab the public’s attention. So we began doing a lot of training with our students, building off the pioneering work of others, on how to protect themselves from the impact of that material.

What we began to realize as users of the internet ourselves, and as parents of children who engage with the internet sometimes in very different ways than we do, was that the insights we were getting as human rights practitioners could potentially be quite valuable for the general public. And there are some tools of the trade or tricks of the trade we thought would be helpful to share.

First of all, thinking about awareness. As a user, for example: what is my baseline functioning? How much am I sleeping? How much am I eating, how much am I drinking? When does that begin to slip? And is it possible that that change is correlated with my degree of online use, what I’m exposing myself to online, etc.?

It’s also about awareness of other people. Let’s say I find something really inspiring or motivating from a human rights perspective, and I share it with someone else, not realizing that person will find this video or photograph or post upsetting. For example, maybe they’ve had a personal experience with the kind of violence that I shared. Being aware of each other’s needs really helps us know what to forward, what to not forward, when and how.

In addition, there are tips and tricks to help users protect themselves from some of the worst online content, and to help companies design their systems to protect their users better. And we also found that community is really important. Try to think critically when you’re engaging with graphic material online: is there someone you can talk with about it? Is there something you can do in response to your outrage or your upset? That translation of the passive ingestion of graphic material into action can be incredibly empowering, even if that action is just talking to people about what you’ve witnessed and deepening the social bonds that we all have.

As part of this research, we had a chance to talk to and study the work of people from all over the world. And one of the researchers whose work I found particularly helpful was Martin Seligman.

He’s been working with the US military for a very long time, trying to figure out how to mitigate the harms of exposure to trauma. And what he’s shown is that there’s a bell curve to how people respond when exposed to potentially traumatic material. The majority of people at the top of the bell curve will be affected for a few months. They may not sleep as well, they may have nightmares, they may increase their drinking or whatever, but will return to a baseline of functioning in a relatively short time.

There’s a much smaller group of people who’ll be permanently affected by what they’ve been exposed to. And then there’s another small group of people who will experience something called post-traumatic growth. So they’ll actually end up stronger as a result of their experience and the processing of that experience.

Making conscious choices around what to expose yourself to is also something people can do for themselves. If they’ve heard, for example, that there’s a video circulating out there of some horrific event, do you really need to see that? Or is it better to engage with it by reading a news article or through some other format that’s less visceral and emotive than the raw footage?


That’s particularly salient when we’re talking about content moderation, because moderators can be exposed to quite graphic and gruesome content; for example, child sexual exploitation. We try to have a multi-pronged approach to moderator wellness that includes being transparent about the kind of content moderators are going to be exposed to when they sign up for the job, and giving them the option of opting out if they feel that content will be too mentally taxing.

We also make sure that we’re embedding tools that reduce unnecessary exposure to harmful content. Things as simple as blurring certain types of content still give moderators the ability to make an accurate decision without full exposure. Grayscale has also been really helpful in reducing the adverse effects of images. We also provide a number of professional counseling services. Are there any other tips you would recommend to companies who are grappling with their employees being exposed to sensitive content?

Alexa Koenig: As human rights practitioners, we’re always thinking about how to trick the brain into thinking that you are not actually experiencing what it is you’re watching. And there are a ton of ways to do that.

One person we interviewed for our book explained that it’s like when you watch a movie and the sound is just a little bit off from the visual. You’re irritated because it breaks that spell and makes you realize: I’m just watching a video, this is not real life. Even if what’s on screen really happened to someone, that reminder can be very protective. So you mentioned grayscale and blurring: those are things that, as human rights investigators, we do manually when companies don’t provide them as an option.

We also sometimes need a way to turn that grayscale or blurring off so that we can investigate specific details. It’s about how many opportunities you give your users and your customers to control their experience of the material they’re engaging with.
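
To make that concrete, here is a minimal sketch of the kind of reversible preprocessing being described, assuming Python and the Pillow imaging library. The file name and function are hypothetical, and this is an illustration rather than WebPurify’s or any platform’s actual tooling: the flagged image is grayscaled and blurred by default, and a moderator can switch both off when a decision requires full detail.

```python
# Hypothetical sketch of reversible, reduced-impact preprocessing.
# The original file is never modified; the moderator can toggle the
# grayscale/blur off to inspect specific details.
from PIL import Image, ImageFilter, ImageOps  # assumes Pillow is installed


def prepare_for_review(path: str, grayscale: bool = True, blur_radius: int = 8) -> Image.Image:
    """Return a reduced-impact copy of an image for moderator review.

    grayscale   -- strip colour, which reviewers report lowers visceral impact
    blur_radius -- Gaussian blur strength; set to 0 to disable blurring
    """
    img = Image.open(path)
    if grayscale:
        img = ImageOps.grayscale(img)
    if blur_radius > 0:
        img = img.filter(ImageFilter.GaussianBlur(blur_radius))
    return img


# Usage: show the softened version by default, and only render full detail
# when the moderator explicitly opts in.
preview = prepare_for_review("flagged_upload.jpg")
full_detail = prepare_for_review("flagged_upload.jpg", grayscale=False, blur_radius=0)
```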

There’s a psychologist in Turkey named Metin Başoğlu, and he’s done a lot of work with survivors of human rights violations, as has his wife. And the two of them have come up with this theory that anxiety really comes from a lack of control, and depression comes from a lack of hope. So when I’m trying to design a system or a process that can protect the individual, I’ll be thinking about: how do I maximize this person’s control to reduce their anxiety? And how do I maximize their optimism that something positive is going to come out of their engagement with upsetting content?

Because that hope can be really protective around depression. So whether companies are thinking about their own moderators or they’re thinking about the public, giving them as many entry points and as many ways to engage with the material as possible is really important.

Another insight from the human rights community is the power of audio. If we know we’re going to watch an upsetting video, we’ll turn the sound off, or way down, because so much of the power of that content is in someone pleading for their life, for example, and that’s what often hurts. So can we get more control over that piece of it? There’s also eliminating the element of surprise. It’s good when someone can get a heads-up in advance – ‘This video may show a rape, or it may show a decapitation’ – either through automation or through partnerships with other people on a team who mark graphic content on a spreadsheet. Or, if you can scroll through a video’s thumbnails first, you can prepare your brain for what’s coming.

And I’m really glad to hear about therapy and other entry points. For some people, peer counseling and having another person who understands what they’re going through may be the most powerful mechanism, as opposed to a professional. Others may want space from their colleagues and have a different way to engage with care. For others still, it might be a wellness program. It’s really hard to know what will most benefit each person, so providing a range of options for care is incredibly important.

You mention scrolling through a video’s thumbnails as a tactic for reducing surprise. I forgot to mention that another tooling capability we’ve found really powerful is the ability to storyboard videos, so you can see the sequence of events in a way that is less suspenseful and therefore less impactful.
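
As a rough illustration of what a storyboard pipeline can look like, the sketch below samples evenly spaced frames from a clip and saves them as grayscale stills, so the sequence of events can be previewed without watching the footage, or hearing its audio, in real time. It assumes Python with OpenCV, the function and file names are hypothetical, and it is not a description of WebPurify’s actual tooling.

```python
# Hypothetical sketch of "storyboarding" a flagged video: sample evenly
# spaced frames so a moderator can preview the sequence of events without
# real-time playback or audio. Assumes OpenCV (cv2) is installed.
import cv2


def storyboard(video_path: str, num_frames: int = 12, out_prefix: str = "board") -> list[str]:
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    if total <= 0:
        cap.release()
        raise ValueError(f"Could not read frames from {video_path}")

    # Pick evenly spaced frame indices across the whole clip.
    step = max(total // num_frames, 1)
    frame_indices = list(range(0, total, step))[:num_frames]

    saved = []
    for i, frame_idx in enumerate(frame_indices):
        cap.set(cv2.CAP_PROP_POS_FRAMES, frame_idx)
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # grayscale lowers visceral impact
        path = f"{out_prefix}_{i:02d}.jpg"
        cv2.imwrite(path, gray)
        saved.append(path)

    cap.release()
    return saved


# Usage: generate stills the moderator can scan before deciding whether
# full playback is actually necessary.
stills = storyboard("flagged_clip.mp4")
```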

We also make sure our moderators understand the impact of their efforts, so they feel empowered by the fact that, in reviewing child sexual exploitation content, they’re actually helping to put predators behind bars. And we can give them the actual numbers. When you’re seeing egregious content but you’re in a position to remove it and remediate it, that is really impactful. It goes back to giving someone a sense of hope, or the feeling that what they’re doing matters.

Alexa Koenig: Yeah, we interviewed a number of content moderators for our book and that was something that we heard repeatedly. I’ve talked to some, at a platform that shall remain nameless, where feedback really wasn’t given and the platform’s employees were so hungry for it. They were asking us as an outside research organization if we could help them better understand what happened with the information that they were sharing with law enforcement. I think it’s such a critical function.

Looking at the incentive structures of big platforms, I also think it’s so important to have an organization that specializes in content moderation services and can constantly iterate on how to do that work as effectively, ethically, and impactfully as possible. Many companies have so many other concerns on their plate that it can be really difficult for them to do it in-house without more support.