What happens when misinformation spreads as fast as a hurricane?
October 16, 2024 | UGC

When natural disasters like Hurricanes Helene and Milton ravage parts of the US, the devastation isn’t limited to the physical world. Online platforms, where millions turn for critical updates, often become flooded with misinformation, graphic content, and heated debates that can spiral out of control.
As misinformation spreads faster than the storm itself, many are left wondering: are online platforms doing their part to protect people?
The reality is that Trust & Safety teams – and their content moderation partners, like us at WebPurify – shift into high gear, responding almost instantly to maintain the integrity of information circulating online.
At WebPurify, we see ourselves as first responders of the internet, ensuring the information users see is accurate and safe.
Below are 5 critical considerations our team prioritizes when navigating these turbulent events.
1. Surfacing Credible Information and Relief Services
During Hurricanes Helene and Milton, relief organizations like the Red Cross and FEMA actively used their social channels to surface credible information. These organizations connect with platforms to amplify official messaging and ensure accurate information about shelters, recovery efforts, and safety precautions reaches affected populations. Platforms then work to boost this information algorithmically and within their products. Oftentimes, moderation and curation teams on the back end sift through hundreds of thousands of posts to ensure credible information gets surfaced at the right time to the right audiences.
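To make the mechanics concrete, here is a minimal sketch of how that kind of credibility boosting might work in a ranking pipeline, assuming a hypothetical allowlist of verified relief organizations and an illustrative score multiplier. None of the names or weights below reflect any platform’s actual system.

```python
# Hypothetical sketch: boost posts from verified relief organizations
# in a ranking pipeline. The allowlist and weights are illustrative.

from dataclasses import dataclass

VERIFIED_RELIEF_SOURCES = {"redcross", "fema"}  # assumed allowlist of official handles
CREDIBLE_BOOST = 2.0  # illustrative multiplier for official sources


@dataclass
class Post:
    author: str
    base_score: float  # engagement/relevance score from the existing ranker


def ranked_feed(posts: list[Post]) -> list[Post]:
    """Re-rank posts so credible relief information surfaces first."""
    def score(post: Post) -> float:
        if post.author in VERIFIED_RELIEF_SOURCES:
            return post.base_score * CREDIBLE_BOOST
        return post.base_score
    return sorted(posts, key=score, reverse=True)


feed = ranked_feed([Post("randomuser", 0.9), Post("fema", 0.6)])
print([p.author for p in feed])  # ['fema', 'randomuser']
```

Even with a much higher raw engagement score, the unofficial post ranks below the boosted official update, which is the behavior curation teams are trying to achieve at scale.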
2. Stemming the Spread of Misinformation
Natural disasters often open the floodgates to misinformation, and Hurricanes Helene and Milton were so awash with online falsehoods that FEMA responded with a Hurricane Rumor Response tracker to counter the spread of misleading content. Our moderation team must stay on top of these trends, filtering out misinformation before it can cause offline harm. The misinformation we enforced against ranged from claims that Democrats were controlling the weather to AI-generated images depicting false scenes of the disasters.
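As a rough illustration, a first-pass triage layer might route posts matching known rumor patterns to human reviewers. The patterns below are hypothetical examples modeled loosely on the kinds of claims FEMA’s tracker addressed; real pipelines rely on far richer signals, including ML classifiers.

```python
# Hypothetical sketch: flag posts matching known rumor patterns for
# human review before publication. Patterns are illustrative only.

import re

RUMOR_PATTERNS = [
    re.compile(r"control(ling)?\s+the\s+weather", re.IGNORECASE),
    re.compile(r"fema\s+(is\s+)?(seizing|confiscating)", re.IGNORECASE),
]


def triage(post_text: str) -> str:
    """Return a routing decision for a moderation queue."""
    for pattern in RUMOR_PATTERNS:
        if pattern.search(post_text):
            return "human_review"  # likely rumor: escalate before it spreads
    return "publish"


print(triage("They are controlling the weather with lasers!"))  # human_review
print(triage("Shelter open at the high school gym tonight."))   # publish
```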
3. Labeling Sensitive Content
In addition to moderating misinformation that could cause harm, our moderation team must also vigilantly monitor for sensitive and graphic content, such as images of deceased individuals, particularly in emotionally charged scenarios like natural disasters. Although platforms may permit such content if it serves a public interest or is newsworthy, they often implement warning interstitials to shield users from unexpected exposure. While automated systems are increasingly capable of detecting this type of content, human oversight is still critical for making nuanced decisions and ensuring the appropriate level of care is applied.
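As a sketch of how that human-machine split might be wired up, consider a hypothetical decision function that maps an automated classifier’s confidence score to an action, escalating borderline cases to moderators. The thresholds and labels are assumptions for illustration, not any platform’s actual policy engine.

```python
# Hypothetical sketch: combine an automated graphic-content classifier
# with human review. Thresholds and labels are illustrative assumptions.

def handle_image(graphic_score: float, is_newsworthy: bool) -> str:
    """Decide how to treat an image given a classifier confidence score.

    graphic_score: 0.0-1.0 confidence from an automated detector.
    is_newsworthy: judgment supplied by a human reviewer or policy team.
    """
    if graphic_score >= 0.95 and not is_newsworthy:
        return "remove"                    # clear violation, no public interest
    if graphic_score >= 0.60:
        if is_newsworthy:
            return "warning_interstitial"  # keep it up, but shield users
        return "human_review"              # nuanced call: route to a moderator
    return "allow"


print(handle_image(0.85, is_newsworthy=True))   # warning_interstitial
print(handle_image(0.70, is_newsworthy=False))  # human_review
```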
4. Ensuring Brand Safety and Responsible Monetization
Content moderation standards for user-generated content often extend to advertising, but the bar for ads is even higher. Platforms are especially cautious about avoiding the perception that they are profiting from a tragic event. This means inappropriate ads must be carefully reviewed and removed. Examples range from graphic content to brands that attempt to exploit vulnerable audiences by promoting products in a way that capitalizes on the tragedy.
Beyond the content of the ads themselves, it’s crucial to consider the context in which they appear. Picture an ad for a brand showing up next to harmful misinformation about a natural disaster, or imagine a user searching for disaster relief information only to be met with a commercial product ad in the results. This is not only insensitive but can damage both the platform’s reputation and the advertiser’s image. That’s why moderation teams must carefully evaluate both the ads and their placement, preventing ads from appearing in inappropriate or exploitative contexts.
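A simple adjacency check captures the idea: before serving an ad, the placement context is screened against categories considered unsafe or exploitative. The category labels and upstream classifier assumed below are illustrative, not a real ad-serving API.

```python
# Hypothetical sketch: block ad placements adjacent to disaster-related
# or harmful content. Category labels come from an assumed upstream
# content classifier; they are illustrative only.

UNSAFE_ADJACENT_CATEGORIES = {"disaster_misinformation", "graphic_content"}
SENSITIVE_QUERY_CATEGORIES = {"disaster_relief"}


def can_place_ad(adjacent_content_labels: set[str], query_labels: set[str]) -> bool:
    """Return False if the placement context is unsafe or exploitative."""
    if adjacent_content_labels & UNSAFE_ADJACENT_CATEGORIES:
        return False  # e.g. an ad next to hurricane misinformation
    if query_labels & SENSITIVE_QUERY_CATEGORIES:
        return False  # e.g. a product ad against a "find shelter" search
    return True


print(can_place_ad({"disaster_misinformation"}, set()))  # False
print(can_place_ad(set(), {"disaster_relief"}))          # False
print(can_place_ad({"weather_news"}, {"sports"}))        # True
```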
5. Maintaining Civil Discourse Online
Natural disasters can be politicized, sparking heated debates on topics like climate change or whether individuals should evacuate or stay in affected areas. These conversations can quickly escalate, leading to harassment, abuse, or even the incitement of violence. Moderating this type of content requires careful balance. The goal is to allow open and genuine discourse, including disagreement, while ensuring that harmful rhetoric is kept in check. Moderators must enforce community standards that encourage respectful debate without allowing conversations to devolve into toxic or dangerous exchanges, maintaining a safe and civil space for all users.
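One way to encode that balance is a tiered enforcement function that leaves room for heated but genuine disagreement while acting decisively on abuse. The classifier signals and thresholds here are purely illustrative assumptions.

```python
# Hypothetical sketch: tiered enforcement that tolerates heated but
# genuine disagreement while escalating on harassment or incitement.
# Scores come from assumed upstream classifiers; thresholds are illustrative.

def enforce_comment(toxicity: float, incites_violence: bool) -> str:
    """Map classifier signals to an enforcement action."""
    if incites_violence:
        return "remove_and_escalate"  # zero tolerance for incitement
    if toxicity >= 0.90:
        return "remove"               # clear harassment or abuse
    if toxicity >= 0.70:
        return "human_review"         # heated, but may be legitimate debate
    return "allow"                    # disagreement alone is not a violation


print(enforce_comment(0.75, incites_violence=False))  # human_review
print(enforce_comment(0.40, incites_violence=False))  # allow
```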