Longterm Wiki

Deepfakes, Elections, and Shrinking the Liar’s Dividend


Credibility Rating

4/5 — High (4)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Brennan Center for Justice

Published by the Brennan Center as part of its 'AI and Democracy' series, this essay is relevant to AI governance discussions around information integrity, election security, and the societal risks of synthetic media.

Metadata

Importance: 52/100 · organizational report · analysis

Summary

This Brennan Center essay examines how public awareness of AI deepfakes paradoxically enables bad actors to falsely disclaim authentic content as fake—a dynamic called the 'liar's dividend.' It analyzes politicians' incentives to exploit this confusion and proposes countermeasures including provenance verification technology, media literacy, and anti-deception norms to preserve democratic epistemic foundations.

Key Points

  • 85% of Americans are concerned about AI deepfakes, but this awareness itself enables the 'liar's dividend'—falsely claiming real content is AI-generated to avoid accountability.
  • The liar's dividend (coined by Chesney & Citron) becomes more potent as deepfake sophistication grows, making false denials of authentic content more believable.
  • Politicians may use outright denial, indirect uncertainty-raising, or claims made through surrogates to exploit deepfake confusion rather than face direct accountability.
  • Countermeasures include content provenance technology (cryptographic watermarking), improved media literacy, and established social norms against false artificiality claims.
  • The essay situates deepfake manipulation within three technical approaches: face swap, lip sync, and puppet master techniques.
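The provenance countermeasure listed above can be illustrated with a minimal sketch. Real provenance standards (such as C2PA) rely on public-key signatures and embedded manifests rather than a shared secret; the HMAC-based `sign_content`/`verify_content` helpers and the `SIGNING_KEY` below are hypothetical simplifications, shown only to convey the core idea that any alteration of the media invalidates its provenance tag:

```python
import hashlib
import hmac

# Hypothetical shared key; production systems use asymmetric signatures.
SIGNING_KEY = b"publisher-secret-key"

def sign_content(media_bytes: bytes) -> str:
    """Produce a provenance tag binding a publisher to these exact bytes."""
    return hmac.new(SIGNING_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_content(media_bytes: bytes, tag: str) -> bool:
    """Check a tag; any change to the media makes verification fail."""
    expected = sign_content(media_bytes)
    return hmac.compare_digest(expected, tag)

video = b"...original footage bytes..."
tag = sign_content(video)
print(verify_content(video, tag))            # authentic copy verifies
print(verify_content(video + b"x", tag))     # altered copy does not
```

Because a doctored clip can no longer carry a valid tag, and an authentic clip can prove its origin, such schemes shrink the space for both deepfakes and false claims that real footage is fake.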

Cited by 2 pages

| Page | Type | Quality |
| --- | --- | --- |
| Epistemic Collapse | Risk | 49.0 |
| AI-Driven Trust Decline | Risk | 55.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 38 KB

In August 2023, the survey firm YouGov asked Americans how concerned they are about various potential consequences arising from artificial intelligence (AI). [Topping the list](https://today.yougov.com/technology/articles/46058-majorities-americans-are-concerned-about-spread-ai), 85 percent of respondents said that they are “very concerned” or “somewhat concerned” about the spread of misleading video and audio deepfakes. This finding is unsurprising given frequent [news](https://nypost.com/2023/06/14/ai-deepfakes-poised-to-wreak-havoc-on-2024-election-experts/) [headlines](https://www.reuters.com/world/us/deepfaking-it-americas-2024-election-collides-with-ai-boom-2023-05-30/#:~:text=Welcome%20to%20America's%202024%20presidential,polarized%20world%20of%20U.S.%20politics.) such as “AI ‘Deepfakes’ Poised to Wreak Havoc on 2024 Election” and “Deepfaking It: America’s 2024 Election Collides with AI Boom.” As the introduction to the [AI and Democracy](https://www.brennancenter.org/series/ai-and-democracy) essay series notes, “increasing awareness of the power of artificial intelligence coincides with growing public anxiety about the future of democracy.”

### Part of

#### [AI and Democracy](https://www.brennancenter.org/series/ai-and-democracy)

Problematically, however, _concern_ about deepfakes poses a threat of its own: unscrupulous public figures or stakeholders can use this heightened awareness to falsely claim that legitimate audio content or video footage is artificially generated and fake. Law professors Bobby Chesney and Danielle Citron call this dynamic the [liar’s dividend](https://www.californialawreview.org/print/deep-fakes-a-looming-challenge-for-privacy-democracy-and-national-security). They posit that liars aiming to avoid accountability will become more believable as the public becomes more educated about the threats posed by deepfakes. The theory is simple: when people learn that deepfakes are increasingly realistic, false claims that real content is AI-generated become more persuasive too.

This essay explores these would-be liars’ incentives and disincentives to better understand when they might falsely claim artificiality, and the interventions that can render those claims less effective. Politicians will presumably continue to use the threat of deepfakes to try to avoid accountability for real actions, but that outcome need not upend democracy’s epistemic foundations. Establishing norms against these lies, further developing and disseminating technology to determine audiovisual content’s provenance, and bolstering the public’s capaci

... (truncated, 38 KB total)
Resource ID: 5494083a1717fed7 | Stable ID: N2MyNDI2Nz