Chesney & Citron (2019)
scholarship.law.bu.edu/faculty_scholarship/640/
A foundational work of legal scholarship, frequently cited in AI governance and policy discussions of synthetic media. It is relevant to AI safety communities concerned with misuse, deception, and the erosion of epistemic trust in the information ecosystem.
Metadata
Importance: 72/100 · journal article · primary source
Summary
Chesney and Citron's seminal 2019 law review article examines the emerging threat of deepfake technology to privacy, democratic discourse, and national security. The paper analyzes how AI-generated synthetic media undermines trust in audiovisual evidence and proposes legal and technical countermeasures. It is widely cited as a foundational work in the legal and policy literature on synthetic media.
Key Points
- Deepfakes pose serious threats to individuals (non-consensual imagery, reputational harm) and to society (political manipulation, disinformation campaigns).
- Existing legal frameworks (defamation, fraud, evidence law) are poorly equipped to address AI-generated synthetic media at scale.
- The technology creates a "liar's dividend," whereby genuine media can be dismissed as fake, eroding trust in all digital evidence.
- The authors propose a multi-layered response including platform liability reforms, criminal statutes, and technical authentication standards.
- National security implications include foreign adversaries using deepfakes for influence operations and to destabilize democratic institutions.
Cited by 2 pages
| Page | Type | Quality |
|---|---|---|
| AI-Driven Legal Evidence Crisis | Risk | 43.0 |
| AI-Driven Trust Decline | Risk | 55.0 |
Cached Content Preview
HTTP 200 · Fetched Mar 20, 2026 · 5 KB
"Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security" by Robert Chesney and Danielle K. Citron
# [Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security](https://scholarship.law.bu.edu/cgi/viewcontent.cgi?article=1640&context=faculty_scholarship)
## Authors
[**Robert Chesney**, _University of Texas_](https://scholarship.law.bu.edu/do/search/?q=author%3A%22Robert%20Chesney%22&start=0&context=9601711)
[**Danielle K. Citron**, _Boston University School of Law_](https://scholarship.law.bu.edu/do/search/?q=%28author%3A%22Danielle%20K.%20Citron%22%20AND%20-bp_author_id%3A%5B%2A%20TO%20%2A%5D%29%20OR%20bp_author_id%3A%28%2280129a0d-6722-47e8-a3bc-3aff49d9c988%22%20OR%20%2223f9fc8c-f49b-43b4-a43b-7c51ad95f524%22%29&start=0&context=9601711)
## Author granted license
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
## Document Type
Article
## Publication Date
12-2019
## ISSN
0008-1221
## Publisher
University of California Berkeley School of Law
## Language
en-US
## Abstract
Harmful lies are nothing new. But the ability to distort reality has taken an exponential leap forward with “deep fake” technology. This capability makes it possible to create audio and video of real people saying and doing things they never said or did. Machine learning techniques are escalating the technology’s sophistication, making deep fakes ever more realistic and increasingly resistant to detection. Deep-fake technology has characteristics that enable rapid and widespread diffusion, putting it into the hands of both sophisticated and unsophisticated actors. While deep-fake technology will bring with it certain benefits, it also will introduce many harms. The marketplace of ideas already suffers from truth decay as our networked information environment interacts in toxic ways with our cognitive biases. Deep fakes will exacerbate this problem significantly. Individuals and businesses will face novel forms of exploitation, intimidation, and personal sabotage. The risks to our democr
... (truncated, 5 KB total)
Resource ID: ad6fe8bb9c2db0d9 | Stable ID: NDFiYTYwOD