Brand safety and suitability: how content moderation can save an advertiser’s reputation
April 19, 2024 | UGC

Content moderation is an essential step in making sure advertising connects with the right audience. It’s not just its role in quality control and policing legal requirements that can make a difference, but its capacity to shield organizations from brand safety risks.
These risks have expanded exponentially in the digital age, and failing to moderate effectively can exact a heavy toll on both reputation and revenue for online media companies and their advertisers.
It’s why we’ve seen a number of high-profile advertising boycotts in recent years. In 2017, for example, PepsiCo, Walmart, Starbucks and a string of other big-name brands pulled their ads from YouTube due to concerns over Google running ads alongside objectionable videos.
More than 1,200 companies and brands paused advertising on Facebook for the whole of July 2020 as part of the ‘Stop Hate for Profit’ ad boycott, while the advertiser boycott of X over antisemitic content – the one that drew Elon Musk’s infamous profane message to advertisers – was reported to have potentially cost the company as much as $75 million for the quarter.
The message is clear: brands could be damaged if their marketing messages are seen alongside unsafe content – and they will take action to prevent this if a platform doesn’t.
Brand safety vs brand suitability
As you might know, brand safety is essentially a baseline: it defines the content a platform should never serve ads adjacent to or allow to be monetized in any way.
“Back when this concept, at least as it applies in the digital age, was first getting off the ground circa 2016-2017, the primary focus of it was very narrow in scope,” explains AJ Brown, COO at Brand Safety Institute and former Head of Brand Safety and Ad Quality at Twitter.
“It was: ‘How does one avoid having advertising messaging associated with undesirable contexts?’ And that can be interpreted in a lot of different ways online.”
While brand safety is concerned with setting boundaries, brand suitability is focused on enabling advertisers to express a preference with regard to the risk of their brand appearing next to certain types of content. This doesn’t necessarily mean overtly harmful content that’s likely to offend – it could simply be news or content related to competitors that certain brands may not wish to associate with.
A relatively new set of guidelines presented by the Global Alliance for Responsible Media (GARM) is helping advertisers and platforms to navigate brand safety and brand suitability issues.
“We talk a lot about ‘floors’ in my line of work, and that’s largely attributable to the GARM,” Brown says. “Back in 2019, when Twitter was a founding member of GARM, one of the first things that we did was adopt and refine an industry-wide set of standards for content that was unsafe for monetization.
“The GARM Brand Safety Floor is the standard for content that’s never acceptable for ad-supported monetization at all. You can be more restrictive above that floor – that’s entering the brand suitability space – but you can’t go below it.”
In terms of moderation, the buck stops with the platform when it comes to the enforcement of platform-wide community standards and brand safety policies, but brand suitability decisions lie with the advertiser.
To that end, advertisers need to be given the tools to tailor the contexts in which their messaging shows up, beyond the policy floors enforced by the platform, says Brown.
“Utilizing those tools is the advertiser’s responsibility – it’s not the platform’s job to implement a given brand’s suitability preferences on their behalf, nor is it their job to enforce that brand’s preferences across the platform for all ads and all content. And that is often a conversation that people who do brand safety work need to have.
“For example, an airline may not want their ads near a news article about a plane making an emergency landing, or a candy brand may want to avoid adjacency to a study about obesity drugs. They should be empowered to make those avoidance decisions for themselves, but it doesn’t mean that the platform should bar that news article or research study from monetization categorically.”
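To make that layering concrete, here’s a minimal sketch in Python of how an advertiser’s suitability preferences might sit on top of a platform-enforced floor. The category names, risk tiers and profile structure are hypothetical illustrations – not GARM’s actual taxonomy or any platform’s real API. The point is simply that the floor is absolute, while everything above it is a per-advertiser tolerance.

```python
# Illustrative sketch only: hypothetical categories and risk tiers,
# not GARM's taxonomy or a real platform API.
from dataclasses import dataclass, field
from enum import IntEnum


class Risk(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    FLOOR = 4   # brand-unsafe: never monetizable, enforced by the platform


@dataclass
class SuitabilityProfile:
    """An advertiser's risk tolerance per content category, layered above the floor."""
    tolerances: dict = field(default_factory=dict)  # category -> max acceptable Risk
    default_tolerance: Risk = Risk.MEDIUM

    def accepts(self, category: str, content_risk: Risk) -> bool:
        # Floor content is always excluded, regardless of advertiser settings.
        if content_risk == Risk.FLOOR:
            return False
        max_risk = self.tolerances.get(category, self.default_tolerance)
        return content_risk <= max_risk


# Hypothetical example: an airline avoiding aviation-incident coverage
# while tolerating medium-risk general news.
airline = SuitabilityProfile(
    tolerances={"aviation_incidents": Risk.LOW, "news": Risk.MEDIUM},
)
print(airline.accepts("aviation_incidents", Risk.MEDIUM))  # False: above its tolerance
print(airline.accepts("news", Risk.MEDIUM))                # True
print(airline.accepts("hate_speech", Risk.FLOOR))          # False: below the floor for everyone
```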
Content moderation for brand safety
Platforms maintain a wide range of policies, many of which layer atop one another. A platform may have separate policies and enforcement operations for brand safety, ad quality, and other business integrity policies. All of these “money” policies, however, are supplemental to the enforcement of platform-wide content policies, also known as community standards.
Brown highlights that a brand safety policy will be more conservative than a platform-wide policy: “Platform-wide policies will stipulate things that are fundamental to everyone who uses a platform, such as not allowing illegal activity.
“Brand safety policies are a level above that in stipulating types of content that are always going to be inappropriate for ad-supported monetization, but which, depending on the platform, might be allowed to exist on the platform without being monetized.
“A notable example of content for which platform policies differ across the ecosystem is pornography and adult content. This content is not allowed on certain parts of the internet, but is permitted on others.
“However, whether or not adult content is permitted on a platform, the industry has agreed that it’s generally not appropriate for advertising to be placed alongside this content. Advertising might still exist on a site that allows it, but the content needs to exist in a part of the site that does not have ads associated with it.”
Brown points to other areas of concern, such as violent and graphic content. “It might be in the public interest for people to be able to access, say, footage of a war. But that isn’t something that any ad or advertiser should be associated with. If a platform’s community standards don’t prohibit a given type of brand-unsafe content, then its brand safety policies step in to ensure that the content in question is not monetized.”
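As a rough illustration of how those layers might interact, the sketch below (hypothetical flags and decision names, not any platform’s real moderation pipeline) follows the ordering Brown describes: community standards decide whether content can exist at all, and the brand safety floor then decides whether the content that remains can carry ads.

```python
# Illustrative only: hypothetical policy checks, not a real platform pipeline.
from enum import Enum, auto


class Decision(Enum):
    REMOVE = auto()        # violates platform-wide community standards
    DEMONETIZE = auto()    # allowed to exist, but never eligible for ads
    MONETIZABLE = auto()   # eligible for ads, subject to advertiser suitability


def moderate(content: dict) -> Decision:
    """Apply community standards first, then the brand safety floor."""
    # Community standards are the broadest layer: illegal activity, etc.
    if content.get("violates_community_standards", False):
        return Decision.REMOVE
    # Brand safety policies are stricter: e.g. graphic war footage may stay up
    # in the public interest but must not be monetized.
    if content.get("violates_brand_safety_floor", False):
        return Decision.DEMONETIZE
    return Decision.MONETIZABLE


# Hypothetical example: newsworthy but brand-unsafe footage stays up without ads.
war_footage = {"violates_community_standards": False,
               "violates_brand_safety_floor": True}
print(moderate(war_footage))  # Decision.DEMONETIZE
```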
Damaging a social media platform’s reputation
Advertisers obviously have skin in the game, but why should platforms and users care about brand safety?
The cynical view is that brand safety is the preservation of advertiser feelings, Brown suggests: “But on the whole it is in everyone’s best interest to cultivate environments where people’s experiences with ads are positive ones. Advertising is, after all, the primary means by which the internet remains open and free to access.”
He highlights the importance of building out monetization practices that are sustainable. “You don’t want to create environments where people feel disincentivized from coming to a platform or participating in the conversation because of how they experience monetized products on your platform.
“People could be searching for real-time trending news on a global conflict, for example. You need to have an understanding that when users seek that sort of information out, they probably don’t want to see ads interrupting their experience.
“It’s not a good look for the platform to appear to be trying to profit from that kind of experience. It’s also not a good look for the advertiser whose ad one might see alongside war footage. Subconsciously or consciously, people may go to another service to find their information where ads aren’t served in these environments.”
No matter where you sit in the industry, it’s in your best interest to care about this practice, Brown adds: “Think of it more as responsible monetization and less about how to protect the sentiments of sensitive advertisers.”
The future of brand safety and content moderation
So what is the next step for brand safety? How will the industry evolve? Brown sees an ever-broadening set of direct and indirect impacts of advertising as the key consideration for the future.
“I think the overarching theme, which holds true both for the evolution of brand safety to this point and for its future, is that the number of considerations facing brand safety practitioners has increased significantly, and it’s going to continue to do so.
“It’s not just about whether your ad can be screenshot next to objectionable content anymore. It’s about the kinds of environments that you cultivate or support, the voices that you amplify, the stances your brand or your platform takes.”
Maintaining the underlying foundation of contextual brand safety is important, Brown adds. “We’ve seen recent examples of what happens when that goes wrong. But the conversation has shifted to what we can build atop a strong brand safety floor that allows advertising to be a force for good.”
Content moderation services continue to evolve too. While automated solutions are required to keep pace with the 24/7 content cycle, human moderation remains vital for reviewing advertising and monetizable pieces of content.
As Brown points out, brand safety and brand suitability are so subjective that an automated solution cannot reliably determine at scale where a given piece of content falls under the GARM Brand Safety Floor + Suitability Framework.
“You can’t ask a single machine learning model to reliably tell you whether a given piece of content is either adult or misinformation or violent or profane or sensitive. Some of these areas are highly subjective, but even if they weren’t, you’re also asking the model to achieve a very broad mission and it’s going to be very unlikely that it can do that reliably at scale,” Brown says. “You need to build a robust suite of models to even get close to enforcing brand safety reliably through automated means.”
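A hedged sketch of that “suite of models” idea might look something like the following, with placeholder stand-ins for trained classifiers and an arbitrary confidence threshold chosen purely for illustration: one narrow model per category, and anything the models are unsure about routed to human review rather than decided automatically.

```python
# Illustrative sketch: placeholder per-category models and thresholds only.
from typing import Callable, Dict

# One narrow classifier per brand-safety category, each returning a
# (flagged, confidence) pair. A real system would load trained models here.
Classifier = Callable[[str], tuple]

CONFIDENCE_THRESHOLD = 0.8  # arbitrary value for illustration


def assess(content: str, models: Dict[str, Classifier]) -> str:
    """Combine per-category verdicts; defer to humans when unsure."""
    needs_review = False
    for category, model in models.items():
        flagged, confidence = model(content)
        if confidence < CONFIDENCE_THRESHOLD:
            # Low-confidence calls are escalated rather than auto-enforced.
            needs_review = True
        elif flagged:
            return f"demonetize ({category})"
    return "human_review" if needs_review else "monetizable"


# Hypothetical stubs standing in for trained classifiers.
models = {
    "adult": lambda text: (False, 0.95),
    "violence": lambda text: ("battlefield" in text, 0.9),
    "profanity": lambda text: (False, 0.6),  # unsure: triggers human review
}
print(assess("live battlefield footage", models))   # demonetize (violence)
print(assess("a recipe for banana bread", models))  # human_review (profanity model unsure)
```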
Fundamentally, we are dealing with human sentiment, Brown concludes, and in the process of developing a brand safety and suitability strategy, you’re effectively anthropomorphizing your brand.
“You’re answering questions about what your brand is and isn’t comfortable with as if it’s a person with its own unique point of view and values. And that’s really hard to develop clear and consistently enforceable criteria for, especially when you add in things like cultural and linguistic nuance.”
Brand issues can stray into gray areas, for which there is no binary yes/no answer, he adds. “Two different people looking at the same piece of content might take very entrenched opposite views of whether or not something violates a policy. That dynamic often arises between platforms and advertisers, and there’s not always going to be an empirically correct answer that we can train machines to enforce.
“That’s why I believe there’s always going to be a need for human involvement.”