
The Solution to Deepfakes Remains Unclear

July 5, 2019 | Video Moderation, UGC


While the aim of leveraging artificial intelligence is to make operations more efficient and insightful, this fast-advancing technology undoubtedly has more nefarious applications. Deepfakes, one such application, present a threat grave enough that companies and lawmakers alike are frantically looking for a solution. However, moderating content to identify deepfakes is not only proving difficult; it is also calling into question free speech on the internet.

The Problem with Deepfakes

The word “deepfake” – borrowing from the AI subset “deep learning” – refers to a video that has been manipulated to appear authentic. AI makes it possible to mimic human speech patterns and mannerisms in order to make someone appear to say or do something they never actually said or did.

For example, in June of this year, a deepfake of Mark Zuckerberg appearing to say, “whoever controls the data, controls the future” surfaced. Although the video convinced few, some assert that even easily detectable fakes can have a significant impact. Siwei Lyu, director of a computer-vision lab at SUNY Albany, is concerned about the greater psychological effect that manipulated media may have on society. He says, “It’s generating an illusion. It can wreak a lot of damage. It’s very hard to remove. And it can come from anywhere. With the Internet, all the boundaries are becoming blurred.”

The blurred boundaries Lyu mentions are precisely what make this technology so insidious. Deepfakes have the power to shape perception while creating doubt at the same time. That is, as manipulated videos proliferate, people may start suspecting legitimate videos of being doctored. Furthermore, guilty parties can claim that video evidence against them is inauthentic.

How Content Moderation Will Be Impacted

The potential harm caused by deepfakes has called into question the power that popular social media platforms have with regard to content moderation. Is it their prerogative to police manipulated videos or does doing so signify the first domino falling in an escalation of internet censorship? As it turns out, these tech companies don’t all see it the same way.

Facebook acknowledges the importance of moderating misinformation, but nothing in its policy requires posts to be true. Conversely, YouTube’s policy stipulates, “Spam, scams, and other deceptive practices that take advantage of the YouTube community aren’t allowed on YouTube.” The video-hosting platform says it is working on a solution to the problem by combining AI moderation with human review.

Twitter, like Facebook, has acknowledged the danger of “manipulative tactics,” while admitting that it isn’t possible to fact-check every tweet. What’s more, the company has announced that it wants to avoid setting any precedents around moderating content based on whether it is true or untrue.

What Is Being Done?

If the major social media platforms aren’t taking control of the situation, what is being done? Many recognize that the way forward is to fight fire with fire: deep learning can be used to analyze footage for the telltale inconsistencies (such as unnatural blinking or mismatched lighting) that expose fakes. However, one major problem is that the incentive to detect deepfakes is not as great as the incentive to synthesize them. First, detection isn’t particularly lucrative. Second, far more effort is going into creating manipulated video than into detecting it, so manipulators can stay steps ahead of detectors. And lastly, funding for research has been limited; the Defense Advanced Research Projects Agency, or DARPA, has been the primary sponsor of research around deepfakes.
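
To make the “fight fire with fire” idea concrete, here is a minimal, hypothetical sketch of frame-level detection in Python. It is not any platform’s or DARPA’s actual system: it simply samples frames from a clip and scores each with a binary CNN classifier, flagging the video if the average “fake” probability is high. The file path, threshold, frame stride, and the ResNet-18 backbone are illustrative assumptions, and a real detector would have to be trained on labeled real and fake footage.

```python
# Illustrative sketch only: score a video frame by frame with a binary classifier.
# The model below is an untrained placeholder; in practice it would be fine-tuned
# on labeled real/fake face crops before its scores meant anything.
import cv2                       # frame extraction
import torch
import torch.nn.functional as F
from torchvision import models, transforms

# Hypothetical settings for illustration.
VIDEO_PATH = "suspect_clip.mp4"
FAKE_THRESHOLD = 0.5
FRAME_STRIDE = 10                # score every 10th frame to keep it cheap

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# ResNet-18 backbone with a two-class head: index 0 = "real", index 1 = "fake".
model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)
model.eval()

def score_video(path: str) -> float:
    """Return the mean per-frame probability that the clip is manipulated."""
    cap = cv2.VideoCapture(path)
    fake_probs, frame_idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % FRAME_STRIDE == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            batch = preprocess(rgb).unsqueeze(0)
            with torch.no_grad():
                probs = F.softmax(model(batch), dim=1)
            fake_probs.append(probs[0, 1].item())
        frame_idx += 1
    cap.release()
    return sum(fake_probs) / len(fake_probs) if fake_probs else 0.0

if __name__ == "__main__":
    score = score_video(VIDEO_PATH)
    verdict = "likely manipulated" if score > FAKE_THRESHOLD else "no manipulation flagged"
    print(f"mean fake probability: {score:.2f} -> {verdict}")
```

Averaging per-frame scores is the simplest possible aggregation; research systems also examine temporal cues such as blinking patterns and lip-sync consistency, which single-frame models miss.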

Experts with the DARPA program consider detecting fakes “defensive technology” integral to national security. But with the 2020 election a year away, it looks as though those working on the defensive side won’t be ready in time to keep deepfakes from misleading the electorate and affecting campaigns.

Congress Weighs In

Because deepfakes are often designed to influence political opinion, and are therefore seen as a threat to democracy, Congress is weighing whether to pass legislation that imposes liability on publishers.

On June 13th, the House Intelligence Committee met for the first time to discuss deepfakes. Experts in the field were asked to give their opinion about changing regulations to hold publishers liable for video content found to be libelous, defamatory, or fraudulent.

The problem is that regulation of this kind would undo the immunity that Section 230 of the Communications Decency Act of 1996 gives publishers for harmful content posted on their sites. Such a change to what was a bipartisan effort would run counter to its original intention of “[encouraging] the unfettered and unregulated development of free speech on the internet.”

“If the social media companies can’t exercise the proper standard of care when it comes to a variety of fraudulent or illicit content, then we have to think about whether that immunity still makes sense,” says Adam Schiff, Chair of the House Intelligence Committee.

The prevailing position in Congress is that companies should update their content moderation practices, but some representatives see the potential pitfalls of painting with too broad a brush and overlooking the intended effect of a given video. The difference between parody and propaganda, for example, further complicates the issue. Does it make sense to penalize a publisher for an obvious joke the same way it would be penalized for a video whose sole purpose is creating discord?

Stay Tuned

The discussion around deepfakes raises several big questions about free speech and the responsibility of publishers. Manipulating a video in a way that fools an audience is an indication of how sophisticated AI has become. However, technology with the power to do widespread harm might just warrant regulation.

Whether or not Congress takes action, it’s likely the discussion around deepfakes and content moderation will evolve quickly in the near future.