New research from WebPurify highlights AI-generated content’s impact on consumer habits.

October 10, 2023 | UGC
A new WebPurify survey highlights the public’s growing awareness of AI’s expanding presence on their favorite digital platforms – and consumers have concerns.
AI-generated content isn’t new, but its capabilities have expanded exponentially in recent years. From generating realistic images and video to writing coherent articles, AI is becoming a powerful tool for content creation. And it isn’t stopping there – so how prepared is the world to take on its challenges? WebPurify’s VP of Trust & Safety, Alexandra Popken, spoke to Sam Gregory, award-winning journalist, Executive Director at WITNESS and an expert on generative AI, to find out.
A Censuswide survey of 1,000 nationally representative US consumers, conducted for WebPurify, found that over 9 in 10 (92%) respondents aged 35-44 agree* with the statement: ‘The widespread presence of Artificial Intelligence-generated content impacts my trust of what I read and see online.’
- 3 in 5 (62%) consumers have noticed an increase in the amount of AI-generated content on the platforms they use, including deepfakes and other fake content, with nearly 2 in 5 (37%) noticing a large increase.
- Over 2 in 5 (45%) respondents do not feel well-equipped to discern between human-generated and AI-generated content, and 1 in 7 (14%) do not feel well-equipped at all.
- 7 in 10 (70%) respondents agree with the statement: ‘It is a platform’s (website or app) responsibility to detect and remove harmful AI-generated content, such as deepfakes,’ with three-quarters (75%) of respondents believing more should be done to protect users from potential risks of AI-generated content.
But whose responsibility is it to ‘solve’ this authenticity challenge: the platforms, developers, or consumers? And what do platforms and developers stand to gain in implementing changes early?
Download the ebook for the full results, and to hear expert insights on the answers to these questions.
Here, WebPurify shares five ways that platforms can prepare:
- Assess your risks and ethical obligations.
Sam notes that vulnerable and marginalized communities face disproportionate harm from generative AI. A human rights impact assessment, and the responses it informs, can give platforms a helpful, actionable framework for protecting these groups. Furthermore, “platforms integrating AI into their products or even using AI to enforce against AI should consider establishing ethical standards around consumer privacy and security, data model transparency and unbiased training process, regulatory compliance and appropriate use,” Alex adds.
- Update your policies to incorporate AI-generated content.
“Ensure community guidelines reflect appropriate and inappropriate uses of AI-generated content,” suggests Alex. “For example, consider prohibiting things like synthetic and manipulated media that are used to deceive, confuse, or harm users.” Additionally, if you’re integrating generative AI into your platform or product, consider implementing internal guidelines that you hold yourselves accountable to.
- Invest in automated and manual content moderation.
Alex suggests implementing robust content review and moderation systems to ensure that AI-generated content adheres to platform guidelines. “It’s also important to partner with a content moderation provider that is up-to-date on this technology’s risks and flexible in their moderation approach,” she adds.
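To make the hybrid automated-plus-manual approach concrete, here is a minimal sketch of how such a routing layer might work. This is an illustration, not WebPurify’s actual system: the function names, classifier, and thresholds are all hypothetical, and a real pipeline would combine multiple models (deepfake detection, policy classifiers) and log every decision for audit.

```python
from dataclasses import dataclass

# Hypothetical thresholds -- a real platform would tune these per policy.
REMOVE_THRESHOLD = 0.95   # confident enough to remove automatically
REVIEW_THRESHOLD = 0.50   # uncertain: escalate to a human moderator

@dataclass
class ModerationResult:
    action: str   # "remove", "human_review", or "allow"
    score: float

def moderate(content: str, classifier) -> ModerationResult:
    """Route content based on an automated risk score.

    `classifier` is any callable returning a 0-1 risk score for the
    content; high-confidence violations are removed automatically,
    ambiguous cases go to human reviewers, the rest are allowed.
    """
    score = classifier(content)
    if score >= REMOVE_THRESHOLD:
        return ModerationResult("remove", score)
    if score >= REVIEW_THRESHOLD:
        return ModerationResult("human_review", score)
    return ModerationResult("allow", score)

# Stand-in classifier for demonstration only.
def fake_classifier(text: str) -> float:
    return 0.7 if "deepfake" in text else 0.1

print(moderate("possible deepfake clip", fake_classifier).action)  # human_review
print(moderate("cat video", fake_classifier).action)               # allow
```

The key design point is the middle band: rather than forcing the automated system to make every call, uncertain content is queued for the manual review Alex describes, so moderators spend their time where machine confidence is lowest.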
- Educate and engage with your users.
Engage with your user base to gather feedback and address concerns related to AI-generated content. Educate your users about AI-generated content, its limitations, and potential risks. And where possible, equip users with tools and signals to help them discern between AI-generated and human-created content.
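One simple form the “tools and signals” above might take is a visible provenance label attached to posts whose origin is declared or detected as AI-generated. The sketch below is hypothetical – the record shape and `ai_generated` flag are assumptions, and in practice the flag could come from creator self-disclosure or automated detection.

```python
def label_post(post: dict) -> dict:
    """Attach a user-facing provenance label to a post record.

    `post` is a hypothetical record; if its `ai_generated` flag is set,
    a disclosure label is added so users can discern the content's origin.
    """
    labeled = dict(post)
    if post.get("ai_generated"):
        labeled["label"] = "AI-generated content"
    else:
        labeled["label"] = None
    return labeled

print(label_post({"id": 1, "ai_generated": True})["label"])  # AI-generated content
```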
- Partner with your peer set.
“The reality is that platforms are going to face an uptick in harmful AI-generated content – whether that’s deepfakes circulating on social media platforms, sophisticated scams in online dating, or malicious phishing attacks powered by AI,” Alex cautions. “To the extent that platforms can signal-share, it will make the industry writ large better prepared for the challenges that lie ahead.” Alex also recommends that platforms consider joining industry-wide consortiums geared towards best practices and standards around generative AI use and moderation.