
“Content moderation is a positioning tool”: Dr. Yi Liu on the marketing case for content moderation

July 12, 2023 | UGC

Content moderation isn’t just a debate centered around “freedom of expression, political discourse, personal liberty, civil society, and government regulations.” It is also a potent tool that platforms with vested interests can wield to propel their revenue growth, making it a critical component of their “marketing decisions.” Such is the compelling argument advanced in the 2021 paper, Implications of Revenue Models and Technology for Content Moderation Strategies, co-authored by Yi Liu from the University of Wisconsin–Madison, and Pinar Yildirim and Z. John Zhang from The Wharton School, University of Pennsylvania.

With 17 years of experience in the content moderation industry, WebPurify understands that businesses approach content moderation differently depending on their aims. We had the opportunity to engage with Dr. Liu in a stimulating conversation about his research and the invaluable perspectives it offers on marketing, technology, and policymaking.

What was the motivation for your research?

The idea for this paper came from seeing offensive content being posted on social media sites, observing how people responded to that, and then noting how the platforms responded in turn. My background is in marketing and technology, so I started to wonder whether it was a mistake that people were seeing those offensive posts, a reflection of imperfect content moderation technology, or whether this was something the sites were choosing to allow. I thought it was possible that this was, in fact, the equilibrium.

As I was refining the concept, I made a conscious choice to factor in the revenue models of social media platforms. The approach one takes, be it centering on user traffic—the sheer number of eyeballs—or enhancing the value that users derive from the platform, potentially justifying a pay-for-access model, results in vastly different incentives.

We had to narrow the focus, of course, so our research only looks at content moderation done through technology, not by humans. In our study, we consider the standards for determining the strictness or leniency of a content moderation policy. Essentially, we’re trying to determine the threshold—how extreme is the speech you’re allowing? And, when offensive content is posted, is its removal a consistent action or does it vary?
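To make those two levers concrete, here is a minimal, hypothetical sketch (ours, not the paper's model) in which a technology-based policy is described by just two numbers: a strictness threshold on a content "extremity" score, and an accuracy parameter that determines how consistently posts above that threshold are actually removed.

```python
import random

def moderate(extremity: float, threshold: float, accuracy: float) -> bool:
    """Return True if a post is removed.

    extremity: how extreme the post is, on a 0-1 scale (a hypothetical score).
    threshold: the strictness of the policy; lower means stricter.
    accuracy:  the chance that a post above the threshold is actually caught,
               so removal is consistent only when accuracy is close to 1.
    """
    if extremity <= threshold:
        return False                    # allowed: below the strictness threshold
    return random.random() < accuracy   # flagged, but imperfect tech can miss it

# A lenient-but-consistent policy vs. a strict-but-noisy one
posts = [0.2, 0.5, 0.7, 0.9]
print([moderate(p, threshold=0.8, accuracy=0.99) for p in posts])
print([moderate(p, threshold=0.4, accuracy=0.60) for p in posts])
```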

Why did you choose to look at social media platforms?

These social media platforms, at the end of the day, are businesses and they need to make money. So when you think about economics and how platforms can make money, there are just two basic models. One is through advertising, but as you can see, that revenue model has been challenged recently, especially given privacy concerns and the question of whether people are less likely to be persuaded by advertisements. So now, social media platforms have a second option: going subscription-based.

And how does the approach to content moderation affect how a platform is viewed?

We like to say that marketing shouldn’t be equated with advertising; instead, it’s more about strategic positioning. For instance, a platform that rigorously moderates content positions itself as a safe haven for those prioritizing safety. Conversely, a platform asserting its lack of content moderation will likely appeal to those cherishing freedom of speech. Essentially, this presents a dilemma, as it’s impossible to satisfy all preferences.

How do social media platforms with advertising-based revenue models use content moderation?

Platforms that depend on advertising-based revenue models strive to amass a large and diverse user base. Their aim is to strike a balance between those who staunchly advocate for freedom of speech and those who are sensitive to extreme content. The majority of users occupy the middle ground, where explicit violent imagery or hate speech is generally unwelcome. Employing content moderation tools to prune such posts may result in the loss of some users, but it enhances the platform’s overall appeal to a broader audience.

Interestingly, under an advertising revenue model, platforms do have some motivation to moderate content. However, we found that they lack significant incentives to implement impeccably accurate content moderation technology. When moderation is too precise, it risks alienating more radical users who, despite their views, still represent potential revenue. To avoid completely deterring these users, platforms might employ less precise moderation technology that allows certain borderline posts to slip through.

By doing so, they maintain some of the more radical users, while also retaining the majority of moderate users. The platform may not be a perfect fit for these moderate users, but given the free access, they may be willing to make the trade-off.
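As a back-of-the-envelope illustration of that trade-off (with made-up numbers, not the paper's model): if ad revenue tracks total eyeballs, moderate users stop gaining much once accuracy is "good enough," while radical users keep leaving as enforcement gets more reliable, so revenue can peak below perfect accuracy.

```python
# Toy illustration with made-up numbers of the trade-off described above:
# under an ad-funded model, revenue tracks total eyeballs, so slightly
# imperfect moderation can beat perfect moderation.

def ad_revenue(accuracy: float) -> float:
    # Moderate users tolerate the platform once accuracy is "good enough".
    moderate_users = 900 * min(1.0, 0.5 + 0.7 * accuracy)
    # Radical users drift away as enforcement becomes more reliable.
    radical_users = 100 * (1.0 - accuracy)
    return moderate_users + radical_users  # revenue per user normalized to 1

for acc in (0.5, 0.75, 0.9, 1.0):
    print(f"accuracy={acc:.2f}  eyeballs={ad_revenue(acc):.0f}")
```

In this toy setup, total eyeballs peak at roughly 75% accuracy rather than at 100%, which is the intuition behind the finding above.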

How do social media platforms with subscription-based revenue models use content moderation?

What we found was that, compared to an advertising-based platform, a subscription-based platform is less likely to conduct content moderation. But if it does moderate content, it will be more aggressive. So there are two levels: do you moderate? And if so, how strict are you?

If an existing platform moves to a subscription-based model, then the subscription fee tends to screen out the users who are really sensitive to extreme content. If you're already not happy about seeing certain extreme posts there, and then suddenly they ask you to pay $10, you'll be the first to drop it. But the users who don't care as much about this are more likely to stay, so you don't have much incentive to invest in content moderation.

Essentially, implementing a subscription fee enables you to categorize user types. It’s not just about boosting user numbers, but also about enhancing the utility they receive, which is the reason they pay you. You’re in a position then to appeal to users who place high value on safety, to the extent that they’re willing to pay for it. If these users indeed prioritize safety, it’s in these platforms’ interest to apply more precise and stringent content moderation than what’s offered on advertising-based platforms. Otherwise, they might simply opt for the free alternatives. Consequently, subscription-based sites have a solid incentive to invest in superior, more accurate technology.

Is there anything else that might affect how important active, accurate content moderation is for a platform?

One critical factor to consider is whether your user base leans more towards posting or reading content. Users who are inclined to post frequently typically value their freedom of speech, while those predominantly interested in reading often prioritize safety. Therefore, a platform can broaden its customer base through more precise content moderation, which eliminates extreme content and users, subsequently enhancing its overall reading utility. In other words, if an advertising-based platform knows that the majority of its users value reading utility over posting utility, it has a clear incentive to moderate content. If not, the motivation may be lacking.

Are platforms that moderate content more aggressively less ‘extreme’ overall?

It’s not always that straightforward. Policymakers sometimes assess platforms based on the frequency of hate speech removal, once users have flagged it. If it’s often deleted, they might categorize that platform as less extreme. However, relying solely on the quantity of content removed may be problematic, as it’s essential to consider the volume of content the platform initially hosts. To clarify, if your content moderation is highly stringent, you may have already deterred certain extreme users from your platform. It’s not that you have explicitly banned them—they’ve self-selected out.

In this scenario, you are actually demonstrating a commendable performance in content moderation if the objective is to minimize extreme content posted on your platform. But the quantity of content you moderate might be minimal because there’s simply less extreme content present to begin with. That’s why we emphasize that one cannot solely measure a platform’s extremity by the amount of content removed. A robust moderation strategy may preemptively discourage certain users from even registering on your platform.

What’s the main thing you’d like people to take away from your research?

Firstly, a content moderation strategy isn’t solely guided by social objectives. It also serves as a positioning instrument that can be leveraged for marketing purposes, driven by profit considerations. Moreover, it’s a tool designed to reinforce what your users—and by extension, your platform—deem important. If your content moderation strategy aligns with these goals, it allows your platform to be both successful and socially responsible.

Secondly, it’s crucial to take into account the diverse types of platforms, as each will necessitate a unique optimal approach to content moderation. This is an essential factor from a policy standpoint. If regulatory measures are needed—perhaps to steer platforms towards operating more like social planners (an economic term signifying a platform striving to maximize total utility for all)—a ‘one size fits all’ strategy won’t suffice. The revenue models of the platforms—whether subscription-based or advertising-based—must be factored in, as they will require different regulations.

In what areas would you like to see further research conducted?

In our study, we have assumed an even spread of people across the spectrum of extremes. So we encourage readers to explore how our models would perform under a normal distribution. Given that most people gravitate towards the middle rather than the extremes, this might further incentivize platforms to precisely curate and remove more extreme content, since fewer users are interested in it.
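As a quick illustration of why that distributional assumption matters (our own sketch, with an arbitrary cutoff): under a normal distribution centered on the middle of the spectrum, far fewer users sit beyond any given extremity cutoff than under a uniform spread, so curating away extreme content costs the platform fewer users.

```python
# Sketch (our own illustration, not from the paper) of the distributional
# assumption: score each user's "extremity" on a 0-1 scale and compare how
# many fall beyond a cutoff under a uniform vs. a normal distribution.
import random

random.seed(0)
N, cutoff = 100_000, 0.8  # cutoff is an arbitrary, hypothetical choice

uniform = [random.random() for _ in range(N)]
# Normal distribution centered on the middle of the spectrum, clipped to [0, 1].
normal = [min(1.0, max(0.0, random.gauss(0.5, 0.15))) for _ in range(N)]

def share_beyond(scores):
    return sum(s > cutoff for s in scores) / len(scores)

print(f"uniform: {share_beyond(uniform):.1%} of users beyond the cutoff")
print(f"normal:  {share_beyond(normal):.1%} of users beyond the cutoff")
```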

Also, our focus was solely on technology-based content moderation. However, in reality, it’s common to see a combination of AI flagging content, which is then reviewed by human content moderators. When humans are involved in content moderation services, accuracy will naturally improve because we comprehend these nuances far better than machines. Therefore, future research could consider not only the costs but also the heightened accuracy offered by incorporating human moderators.