Explainer: How Deepfakes Are Impacting Society


Actor Jordan Peele and his deepfake Barack Obama

Fake online video and audio content has become a powerful tool for spreading political misinformation and harming personal reputations.
By Morgan Currie.


Deepfakes are produced by machine-learning systems that transfer the content of one piece of media onto another. The AI ingests video, photographs or audio of a person or object, learns to mimic their appearance or behaviour, and maps the results onto a target person or object, creating an eerily accurate counterfeit. 
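The best-known face-swap pipelines train one shared encoder alongside a separate decoder per identity; the swap then amounts to decoding one person's latent code with the other person's decoder. The toy sketch below illustrates only that structure – random vectors stand in for face images, simple linear maps stand in for deep networks, and all names are hypothetical – it is not a production deepfake system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for face images: 8-dimensional vectors for two people,
# with person B's "faces" occupying a shifted region of the space.
faces_a = rng.normal(size=(200, 8))
faces_b = rng.normal(size=(200, 8)) + 2.0

latent_dim = 4

# A single shared encoder extracts features common to both people.
# For simplicity it is a fixed random orthonormal projection here;
# real systems learn the encoder jointly with the decoders.
q, _ = np.linalg.qr(rng.normal(size=(8, 8)))
encoder = q[:, :latent_dim]

def train_decoder(faces, steps=2000, lr=0.05):
    """Fit a linear decoder that reconstructs `faces` from the shared latent code."""
    decoder = np.zeros((latent_dim, faces.shape[1]))
    z = faces @ encoder                          # encode into the shared latent space
    for _ in range(steps):
        err = z @ decoder - faces                # reconstruction error
        decoder -= lr * z.T @ err / len(faces)   # gradient step on the decoder
    return decoder

dec_a = train_decoder(faces_a)   # decoder specialised to person A
dec_b = train_decoder(faces_b)   # decoder specialised to person B

# The swap: encode a frame of person A, then decode it with B's decoder.
# In a real system this renders A's pose and expression as B's face.
fake = faces_a[:1] @ encoder @ dec_b
```

Because both identities share one latent space, whatever the encoder extracts from person A (pose, expression) is rendered by person B's decoder in B's likeness – that shared representation is the core of the trick.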

The deep-learning software used to make deepfakes has become cheap and accessible, raising questions about the potential for abuse. While there are plenty of examples that are benign and playful – Salvador Dali taking selfies with museum patrons, for instance – the origins of the technology show how harmful it can be. 

The term first became widely used in 2017, after a Reddit user by the name ‘Deepfakes’ posted pornographic videos featuring actresses whose faces were digitally altered to resemble female celebrities, such as Scarlett Johansson and Gal Gadot. For many, the videos crossed basic lines governing consent and harassment and showcased a potent new tool for revenge porn.1 That ‘Deepfakes’ used Google’s free, open-source machine-learning software also drove home how easily a hobbyist – or anyone with an interest in the technology – could pass off falsehoods as reality.

Since then, other examples of disturbing deepfakes include a video of US House of Representatives Speaker Nancy Pelosi altered to make her sound drunk – it circulated widely after Donald Trump posted it and Facebook refused to take it down – and a video by two artists of Facebook CEO Mark Zuckerberg confessing that his company “really owns the future”. 

Actor Jordan Peele used a deepfake Barack Obama to warn of the dangers of deepfakes, highlighting how they can distort reality in ways that could undermine people’s faith in trusted media sources and incite toxic behaviour. And while the vast majority of deepfakes are non-consensual pornography, not misinformation,2 some in the intelligence community have warned that foreign governments could use deepfakes to disrupt or sway elections.3,4

Social media companies have started to address the deepfake dilemma – Facebook ran a public contest in 2019 to help it develop deepfake-detection models, then banned deepfakes in early 2020 in anticipation of the damage they could do in an election year.5 Twitter now deletes reported deepfakes and blocks accounts that publish them.

Governments are also putting forward laws to curb the technology. In 2019 California passed laws restricting the use of deepfakes in election campaigning and in non-consensual pornography, and in December 2020 the US Congress passed into law the Identifying Outputs of Generative Adversarial Networks Act.6

The entertainment industry has responded coolly to these protections, claiming too much oversight clamps down on free speech rights. In 2018, Walt Disney Company’s Vice President of Government Relations, Lisa Pitney, wrote that a proposed New York law including controls on the use of “digital replicas” would “interfere with the right and ability of companies like ours to tell stories about real people and events. The public has an interest in those stories, and the First Amendment protects those who tell them.”7

Others feel such legislation does not go far enough. Existing laws put the burden on users to identify deepfakes while exonerating the platforms they circulate on: social media companies remain exempt from regulation, and no industry-wide standards yet exist, keeping them off the hook for now.8


1. Alptraum, L., ‘Deepfake Porn Harms Adult Performers, Too’, January 2020, https://www.wired.com/story/deepfake-porn-harms-adult-performers-too

2. Cox, J., ‘Most Deepfakes Are Used for Creating Non-Consensual Porn, Not Fake News’, October 2019, https://www.vice.com/en/article/7x57v9/most-deepfakes-are-porn-harassment-not-fake-news

3. Beavers, O., ‘House Intelligence panel to examine 'deepfake' videos in June’, June 2019, https://thehill.com/policy/cybersecurity/446611-house-intel-to-examine-deepfake-videos-in-june

4. Shao, G., ‘Fake videos could be the next big problem in the 2020 elections’, October 2019, updated January 2020, https://www.cnbc.com/2019/10/15/deepfakes-could-be-problem-for-the-2020-election.html

5. Cole, S., ‘Facebook Just Announced $10 Million ‘Deepfakes Detection Challenge’’, September 2019, https://www.vice.com/en/article/8xwqp3/facebook-deepfake-detection-challenge-dataset

6. US Congress, S.2904 – Identifying Outputs of Generative Adversarial Networks Act, 116th Congress, https://www.congress.gov/bill/116th-congress/senate-bill/2904

7. Pitney, L., Walt Disney Company opposition letter to New York Assembly Bill A.8155-B, 2018, https://www.rightofpublicityroadmap.com/sites/default/files/pdfs/disney_opposition_letters_a8155b.pdf

8. Nonnecke, B., ‘California’s Anti-Deepfake Law Is Far Too Feeble’, November 2019, https://www.wired.com/story/opinion-californias-anti-deepfake-law-is-far-too-feeble/
