Longterm Wiki

The Liar’s Dividend: Deepfakes, synthetic media, and the cybersecurity disinformation crisis | Biometric Update


Relevant to AI safety discussions around misuse of generative AI, particularly how synthetic media undermines epistemic infrastructure and complicates efforts to verify authentic information in high-stakes contexts.

Metadata

Importance: 42/100 · news article · analysis

Summary

This article explores the 'liar's dividend' phenomenon, where the existence of deepfakes and synthetic media allows bad actors to deny authentic evidence as fake, compounding disinformation risks beyond direct deception. It examines how AI-generated synthetic media is reshaping cybersecurity threats and undermining epistemic trust in digital communications. The piece situates these challenges within broader concerns about verification, authenticity, and institutional resilience.

Key Points

  • The 'liar's dividend' refers to how deepfake technology lets people plausibly deny real evidence, not just fabricate false evidence.
  • Synthetic media creates a dual threat: direct disinformation through fakes and indirect erosion of trust in authentic content.
  • Cybersecurity frameworks must evolve to address AI-generated content as both an attack vector and a tool for deniability.
  • Biometric and digital verification systems face increasing pressure as deepfakes become more sophisticated and accessible.
  • Societal and institutional resilience to disinformation requires technical, policy, and educational responses working in concert.

Cited by 1 page

Page                 Type   Quality
Epistemic Collapse   Risk   49.0

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 18 KB
The Liar’s Dividend: Deepfakes, synthetic media, and the cybersecurity disinformation crisis

Aug 6, 2025, 12:40 pm EDT | James R. McQuiggan

Categories: Biometrics News | Industry Insights

By James R. McQuiggan, CISSP, SACP

I was watching a news show over the weekend, and they were discussing the concept of "AI slop." I suspect the term will land in Webster's dictionary in the coming year; it describes images, video, and text created by generative AI and spread across society and social media, blurring the line between truth and fiction at an alarming pace. With monthly improvements in generative AI, synthetic media in images, audio, and video is not only testing our ability to distinguish the real from the fake, but also fueling a dangerous loop known as "The Liar's Dividend." This concept, paired with sophisticated AI-driven media, is on a path to disrupt our trust in media, society, and cybersecurity.

 What is the Liar’s Dividend?

The Liar's Dividend describes the advantage gained by those who spread false information in an environment flooded with misinformation. Lies travel faster than the truth, and the truth is always trying to keep up. As synthetic media proliferates, it becomes easier for scammers to plant doubt about what is real, even in the face of authentic evidence. The phrase was first articulated by legal scholars Bobby Chesney and Danielle Citron, who noted that as people lose trust in digital media such as images or videos, liars can dismiss inconvenient truths as "fake news" or AI-generated deepfakes, dodging consequences and accountability.

 How it works

With high-quality deepfakes and other synthetic media becoming widespread, people are growing skeptical of what they see online. And while the technology today is the worst it will ever be, it only improves from here.

With plausible deniability, public figures or criminals can now claim genuine footage is fake, misdirecting public perception. Fake hands and extra fingers have even been sold online; when worn, they prompt viewers to automatically dismiss an image or video as AI-generated because the finger count is wrong, discrediting legitimate footage without any further analysis.

Ultimately, when truth itself becomes subjective, malicious actors gain greater freedom to mislead, manipulate, and evade justice. The stakes keep rising: the basis for informed decision-making and societal norms is threatened when real evidence can be so easily dismissed.
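One technical response to this dynamic, relevant to the verification pressure the article describes, is cryptographic provenance: signing media at capture so authentic footage can be positively verified rather than merely argued over. The sketch below is illustrative only and not the author's proposal; it uses a symmetric HMAC with a made-up device key as a stand-in for the asymmetric device signatures that real provenance standards such as C2PA employ.

```python
import hashlib
import hmac

# Hypothetical capture-time secret held by the recording device.
# (Assumption: a real provenance scheme would use an asymmetric
# device key and a certificate chain, not a shared secret.)
DEVICE_KEY = b"example-device-key"

def sign_media(media_bytes: bytes) -> str:
    """Produce a provenance tag bound to the exact media bytes."""
    return hmac.new(DEVICE_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, tag: str) -> bool:
    """Recompute the tag; any alteration of the bytes invalidates it."""
    expected = hmac.new(DEVICE_KEY, media_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

original = b"raw video frame data"
tag = sign_media(original)

print(verify_media(original, tag))          # authentic footage verifies
print(verify_media(b"doctored frame", tag)) # tampered bytes fail
```

The point of such a scheme is to flip the burden of proof: instead of asking "can anyone prove this is fake?", a verifier asks "does this carry a valid capture-time signature?", which is harder for a liar to wave away.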

... (truncated, 18 KB total)
Resource ID: 094219a46adde1cf | Stable ID: OTI5ZjU4ND