Paris & Donovan (2019)
datasociety.net/library/deepfakes-and-cheap-fakes/
A foundational policy-oriented report from Data & Society relevant to AI governance discussions about synthetic media, misinformation, and the erosion of epistemic trust in audiovisual content.
Metadata
Importance: 62/100 · organizational report · analysis
Summary
This Data & Society report by Paris and Donovan examines the spectrum of manipulated media, from sophisticated AI-generated deepfakes to simpler 'cheap fakes' produced with basic editing tools. It analyzes how these technologies threaten the integrity of audiovisual evidence and public trust in media. The report provides a framework for understanding media manipulation and its political and social consequences.
Key Points
- Introduces the concept of 'cheap fakes': manipulated media created with accessible tools (e.g., slowing or speeding video), alongside AI-generated deepfakes
- Argues that low-tech manipulations may pose equal or greater societal risk than deepfakes due to their accessibility and plausible deniability
- Examines how manipulated media undermines trust in authentic audiovisual evidence, creating a 'liar's dividend' for bad actors
- Provides a taxonomy of media manipulation techniques ranging from simple decontextualization to full synthetic generation
- Calls for platform accountability, media literacy interventions, and policy responses to address manipulated media at scale
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI-Driven Legal Evidence Crisis | Risk | 43.0 |
Cached Content Preview
HTTP 200 · Fetched Mar 20, 2026 · 3 KB
In _Deepfakes and Cheap Fakes_, Data & Society Affiliates Britt Paris and Joan Donovan trace decades of **audiovisual (AV) manipulation** to demonstrate how **evolving technologies aid consolidations of power** in society. Deepfakes, they find, are no new threat to democracy.
“New media technologies do not inherently change how evidence works in society. What they do is provide new opportunities for the negotiation of expertise, and therefore power.”
_— Britt Paris and Joan Donovan_
Coining the term “cheap fakes,” Paris and Donovan demonstrate that the creation of successfully deceptive media has never necessarily required advanced processing technologies, such as today’s machine learning tools. A “**deepfake**” is a video that has been altered through some form of machine learning to “hybridize or generate human bodies and faces,” whereas a “**cheap fake**” is an AV manipulation created with cheaper, more accessible software, or none at all. Cheap fakes can be rendered through Photoshop, lookalikes, re-contextualized footage, speeding, or slowing.
Thanks to social media, both kinds of AV manipulation can now be spread at unprecedented speeds. For a spectrum diagram, [**click here**](https://datasociety.net/output/deepfakes-and-cheap-fakes/#spectrum).
Like many past media technologies, deepfakes and cheap fakes have jolted traditional rules around evidence and truth, and trusted institutions must step in to redefine those boundaries. This process, however, risks a select few experts gaining “juridical, economic, or discursive power,” thus further entrenching social, political, and cultural hierarchies. Those without the power to negotiate truth, including people of color, women, and the LGBTQA+ community, will be left vulnerable to increased harms, say the authors.
Paris and Donovan argue that we need more than an exclusively technological approach to address the threats of deep and cheap fakes. Any solution must take into account both the history of evidence and the “social processes that produce truth,” so that the power of expertise does not lie only in the hands of a few and reinforce structural inequality, but is instead distributed among at-risk communities. “Media requires social work for it to be considered as evidence,” they write.

- [Download the spectrum](https://datasociety.net/wp-content/uploads/2019/09/Deep_fakes_spectrum.png)
By using lookalike stand-ins, or relabeling footage of one event as another, media creators can easily manipulate an audience’s interpretations.
- [Download](https://datasociety.net/wp-content/uploads/2019/09/DS_Deepfakes_Cheap_FakesFinal-1.pdf)
report
Deepfakes and Cheap Fakes
Britt Paris, Joan Donovan
###### Correction: The Zuckerberg deepfake was created by artists Bill Posters and Daniel Howe in partnership with Israeli startup CannyAI. The pdf has been changed to r