Longterm Wiki

The Generative AI Paradox: GenAI and the Erosion of Trust, the Corrosion of Information Verification, and the Demise of Truth

paper

Author

Emilio Ferrara

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

Ferrara analyzes generative AI's epistemic risks, focusing on synthetic realities and the erosion of trust and institutional verification practices. This is a critical concern for understanding how AI systems undermine information integrity and shared epistemic foundations.

Paper Details

Citations: 2 (1 influential)
Year: 2026
Methodology: peer-reviewed
Categories: Future Internet

Metadata

arXiv preprint · analysis

Summary

This paper by Emilio Ferrara argues that generative AI's most consequential risk extends beyond isolated deepfakes or misinformation to the creation of 'synthetic realities'—coherent, interactive information environments where content, identity, and social interaction are jointly manufactured. The core concern is the progressive erosion of shared epistemic ground and institutional verification practices as synthetic content becomes cheap and easy to produce at scale while remaining difficult to audit. The paper formalizes a layered taxonomy of synthetic reality risks, documents recent real-world manifestations (2023-2025) across fraud, elections, and supply chains, and proposes a mitigation framework centered on provenance infrastructure, platform governance, institutional redesign, and public resilience. The paper concludes with the 'Generative AI Paradox': as synthetic media becomes ubiquitous, societies may rationally discount all digital evidence, fundamentally raising the cost of truth for democratic and economic institutions.
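The provenance infrastructure the paper calls for rests on a simple mechanism: cryptographically binding a claim about a piece of media (who made it, when, with what tool) to the media's content hash, so any alteration breaks the binding. The sketch below illustrates that idea only; real systems such as C2PA use public-key certificates and a richer manifest format, whereas this simplification uses a shared HMAC key, and all names (`ISSUER_KEY`, `make_manifest`, `verify_manifest`) are hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical issuer secret. Real provenance standards (e.g. C2PA)
# use public-key signatures and certificate chains, not a shared key.
ISSUER_KEY = b"demo-issuer-secret"

def make_manifest(media_bytes: bytes, claims: dict) -> dict:
    """Bind claims (who/when/how) to the media via its hash, then sign."""
    payload = {"sha256": hashlib.sha256(media_bytes).hexdigest(),
               "claims": claims}
    blob = json.dumps(payload, sort_keys=True).encode()
    payload["sig"] = hmac.new(ISSUER_KEY, blob, hashlib.sha256).hexdigest()
    return payload

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Reject if the media was altered or the manifest was tampered with."""
    payload = {k: manifest[k] for k in ("sha256", "claims")}
    blob = json.dumps(payload, sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, blob, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["sig"])
            and manifest["sha256"] == hashlib.sha256(media_bytes).hexdigest())

media = b"original pixels"
m = make_manifest(media, {"tool": "camera", "captured": "2026-01-01"})
assert verify_manifest(media, m)            # untouched media verifies
assert not verify_manifest(b"edited pixels", m)  # altered media fails
```

Note the asymmetry the paper emphasizes: verification is cheap when provenance is attached at creation time, but nothing in this scheme can establish the origin of media that arrives without a manifest, which is why provenance must be deployed as infrastructure rather than forensics.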

Cited by 1 page

Page                       Type   Quality
AI Trust Cascade Failure   Risk   55.0

Cached Content Preview

HTTP 200 · Fetched Feb 22, 2026 · 50 KB
The Generative AI Paradox: GenAI and the Erosion of Trust, the Corrosion of Information Verification, and the Demise of Truth
Emilio Ferrara 1,2,3,∗

1 Thomas Lord Department of Computer Science, University of Southern California (USC)
2 Annenberg School for Communication, University of Southern California (USC), Los Angeles, CA, USA
3 Information Sciences Institute (ISI), University of Southern California (USC), Marina del Rey, CA, USA

∗ Correspondence: emiliofe@usc.edu
 
 Abstract

Generative AI (GenAI) now produces text, images, audio, and video that can be perceptually convincing at scale and at negligible marginal cost. While public debate often frames the associated harms as “deepfakes” or incremental extensions of misinformation and fraud, this view misses a broader socio-technical shift: GenAI enables synthetic realities, coherent, interactive, and potentially personalized information environments in which content, identity, and social interaction are jointly manufactured and mutually reinforcing. We argue that the most consequential risk is not merely the production of isolated synthetic artifacts, but the progressive erosion of shared epistemic ground and institutional verification practices as synthetic content, synthetic identity, and synthetic interaction become easy to generate and hard to audit. This paper (i) formalizes synthetic reality as a layered stack (content, identity, interaction, institutions), (ii) expands a taxonomy of GenAI harms spanning personal, economic, informational, and socio-technical risks, (iii) articulates the qualitative shifts introduced by GenAI (cost collapse, throughput, customization, micro-segmentation, provenance gaps, and trust erosion), and (iv) synthesizes recent risk realizations (2023–2025) into a compact case bank illustrating how these mechanisms manifest in fraud, elections, harassment, documentation, and supply-chain compromise. We then propose a mitigation stack that treats provenance infrastructure, platform governance, institutional workflow redesign, and public resilience as complementary rather than substitutable, and outline a research agenda focused on measuring epistemic security. We conclude with the Generative AI Paradox: as synthetic media becomes ubiquitous, societies may rationally discount digital evidence altogether, raising the cost of truth for everyday life and for democratic and economic institutions.

 
 
Keywords: artificial intelligence; generative AI; information verification; epistemic security
 Figure 1: (Top Left) In January 2024, the r/StableDiffusion community on Reddit demonstrated a proof-of-concept workflow to synthetically generate personas and (Bottom Left) proofs of identity. (Top Right) GenAI can produce lifelike depictions of never-occurred events (MJv5 prompt: 

... (truncated, 50 KB total)