AI-Driven Trust Decline

Risk
Summary

Trust in the US federal government has declined from 73% (1958) to 17% (2025), and AI-generated deepfakes, projected to reach 8 million videos shared in 2025, are accelerating that erosion through the 'liar's dividend' effect, in which the mere possibility of synthetic content undermines trust in all evidence. Media literacy interventions show a d = 0.60 effect size, and C2PA content authentication offers medium-high promise for verification, though adoption remains uncertain (10-60% of major platforms by 2027).



Severity: Medium-High
Likelihood: High
Timeframe: 2025
Maturity: Growing
Type: Epistemic
Status: Ongoing
Related risks: Epistemic Collapse, AI Disinformation, Deepfakes

Quick Assessment

| Dimension | Assessment | Evidence |
|---|---|---|
| Current Trust Level | Critical (17-22% federal government trust) | Pew Research Center 2025: down from 73% in 1958 |
| Decline Rate | Accelerating | 55-point drop since 1958; 5-point decline 2024→2025 alone |
| AI Acceleration | High | 500K deepfake videos shared on social media in 2023, projected 8M by 2025 |
| Coordination Impact | Severe | Only 34% trust government to use AI responsibly (Edelman 2025) |
| Reversibility | Low (decades) | Trust rebuilding requires sustained institutional reform over 10-20+ years |
| Intervention Readiness | Medium | C2PA standard gaining traction; media literacy shows d = 0.60 effect size |
| Cross-Domain Risk | High | Trust collapse undermines pandemic response, climate action, AI governance |

Overview

Trust erosion describes the active process of declining public confidence in institutions, experts, media, and verification systems. While the current state of societal trust is analyzed in the Societal Trust parameter page, this page focuses on trust erosion as a risk—examining the threat model, acceleration mechanisms, and responses.

For comprehensive data and analysis, see Societal Trust, which covers:

  • Current trust levels (US government trust: 77% in 1964 → 22% in 2024)
  • International comparisons and benchmarks
  • AI-driven acceleration mechanisms (liar's dividend, deepfakes, scale asymmetry)
  • Factors that increase trust (interventions, C2PA standards, media literacy)
  • Trajectory scenarios through 2030

Risk Assessment

| Dimension | Assessment | Notes |
|---|---|---|
| Severity | High | Undermines democratic governance, collective action on existential risks |
| Likelihood | Very High | Already occurring; AI accelerating pre-existing trends |
| Timeline | Ongoing | Effects visible now, intensifying over 2-5 years |
| Trend | Accelerating | AI content generation scaling faster than verification capacity |
| Reversibility | Difficult | Rebuilding trust requires sustained effort over decades |

Why Trust Erosion Is a Risk

Trust erosion threatens AI safety and existential risk response through several mechanisms:

| Domain | Impact | Evidence |
|---|---|---|
| AI Governance | Regulatory resistance, lab-government distrust | Only ≈40% trust government to regulate AI appropriately (OECD 2024) |
| Elections | Contested results, violence | 4 in 10 with high grievance approve hostile activism (Edelman 2025) |
| Public Health | Pandemic response failure | Healthcare trust dropped 30.4 points during COVID-19 |
| Climate Action | Policy paralysis | Only ≈40% believe government will reduce emissions effectively |
| International Cooperation | Treaty verification failures | Liar's dividend undermines evidence-based agreements |

The core dynamic: low trust prevents the coordination needed to address catastrophic risks, while AI capabilities make trust harder to maintain.


Causal Mechanisms

[Diagram: causal pathways of AI-driven trust erosion]

The diagram illustrates how AI-driven content generation combines with existing polarization and institutional failures to create compounding trust erosion through the liar's dividend (where synthetic media possibility undermines all evidence) and scale asymmetry (where misinformation production vastly outpaces verification capacity).
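As a purely illustrative complement to that description, the toy simulation below sketches the compounding dynamic in Python: synthetic-content volume grows faster than verification capacity, and trust declines as the unverifiable share of content rises. Every parameter here (growth multipliers, trust sensitivity) is an assumption chosen for demonstration, not an estimate calibrated to the data on this page.

```python
# Toy model of compounding trust erosion: illustrative only.
# All parameters below are assumptions chosen for demonstration,
# not estimates derived from the sources cited on this page.

def simulate_trust(years=5,
                   content_growth=4.0,      # assumed yearly multiplier for synthetic content
                   verification_growth=1.5, # assumed yearly multiplier for verification capacity
                   sensitivity=0.15):       # assumed fraction of trust lost per unit unverified share
    synthetic = 1.0      # synthetic-content volume (arbitrary units)
    verification = 1.0   # verification capacity (arbitrary units)
    trust = 0.22         # starting trust level (~22%, per the Pew figure above)

    for year in range(1, years + 1):
        synthetic *= content_growth
        verification *= verification_growth
        unverified_share = max(0.0, 1.0 - verification / synthetic)
        trust = max(0.0, trust - sensitivity * unverified_share * trust)
        print(f"year {year}: unverified share {unverified_share:.0%}, trust {trust:.1%}")

simulate_trust()
```

Under these assumed parameters trust roughly halves over five years; the point is the shape of the feedback loop, not the specific numbers.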


Historical Trust Trajectory

Trust erosion is not new, but AI capabilities threaten to accelerate existing trends dramatically:

| Period | US Government Trust | Key Driver | AI Relevance |
|---|---|---|---|
| 1958-1964 | 73-77% | Post-WWII institutional confidence | None |
| 1965-1980 | 77% → 26% | Vietnam War, Watergate | None |
| 1980-2000 | 26-44% | Economic growth, Cold War end | None |
| 2001-2008 | 25-49% | 9/11 rally, Iraq War decline | Early internet |
| 2009-2020 | 17-24% | Financial crisis, polarization | Social media amplification |
| 2021-2025 | 17-22% | Pandemic, election disputes, AI content | Deepfakes, LLM misinformation |

Sources: Pew Research Center, Gallup


The AI Acceleration Factor

AI capabilities are fundamentally changing the trust erosion dynamic through several mechanisms:

Scale Asymmetry

The volume of synthetic content is growing exponentially:

  • 2023: 500,000+ deepfake videos shared on social media
  • 2025 projection: 8 million deepfake videos
  • Daily AI image generation: 34 million images/day via tools like DALL-E, Midjourney
  • Total since 2022: Over 15 billion AI-generated images created

This creates a fundamental asymmetry: misinformation can be produced faster than it can be verified, and the mere possibility of synthetic content undermines trust in authentic content (Atlantic Council Digital Forensics Lab).
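A quick back-of-envelope check (Python) makes the asymmetry concrete: it derives the implied yearly growth factor from the 500K (2023) and 8M (2025) deepfake figures above, then compares daily AI image output against an assumed human review throughput. The 50-items-per-reviewer-per-day figure is an illustrative assumption, not a measured number.

```python
# Back-of-envelope scale-asymmetry check using figures cited above.
deepfakes_2023 = 500_000
deepfakes_2025_projected = 8_000_000

# Implied average yearly growth factor over the two-year span.
annual_growth = (deepfakes_2025_projected / deepfakes_2023) ** 0.5
print(f"Implied yearly growth in deepfake volume: ~{annual_growth:.1f}x")  # ~4x

ai_images_per_day = 34_000_000  # cited daily AI image generation

# Assumption: one human reviewer can verify ~50 items per day (illustrative only).
items_verified_per_reviewer_per_day = 50
reviewers_needed = ai_images_per_day / items_verified_per_reviewer_per_day
print(f"Reviewers needed to manually check one day's output: ~{reviewers_needed:,.0f}")
```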

Mass-Class Digital Divide

The 2025 Edelman Trust Barometer reveals a significant trust gap:

  • 71% of UK bottom income quartile feel they will be "left behind" by AI
  • 65% of US bottom income quartile share this concern
  • Only 1 in 4 non-managers regularly use AI vs. 2 in 3 managers

This creates a two-tier information environment where those with AI literacy can navigate synthetic content while others cannot, exacerbating existing inequality and trust divides.


Responses That Address This Risk

| Response | Mechanism | Effectiveness | Evidence |
|---|---|---|---|
| Content Authentication | Cryptographic verification via C2PA standard | Medium-High | Fast-tracked to ISO 22144; adopted by Adobe, Microsoft, BBC |
| Epistemic Infrastructure | Fact-checking networks, verification tools (Vera.ai, WeVerify) | Medium | Fact-checks reduce belief (d = 0.27, meta-analysis) |
| Epistemic Security | Platform policies, algorithmic demotion of misinformation | Medium | Variable by platform; X Community Notes shows promise |
| Deepfake Detection | AI-based detection tools, watermarking | Medium | Cat-and-mouse dynamic; detection lags generation by 6-18 months |
| Media Literacy Programs | Critical evaluation training, prebunking | High | d = 0.60 overall; d = 1.04 for sharing reduction (Huang et al. 2024) |

See Societal Trust for detailed intervention analysis.
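To make the content-authentication row above concrete, here is a minimal sketch of the underlying idea: sign a hash of the content at publish time and verify the signature later. It uses Python's third-party cryptography package and is a simplified illustration of provenance signing in the spirit of C2PA content credentials, not the actual C2PA manifest format or SDK.

```python
# Minimal provenance-signing sketch (illustrative; not the C2PA manifest format).
# Requires the third-party "cryptography" package.
import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Publisher side: sign the hash of the content at publish time.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

content = b"raw bytes of an image or video"
signature = private_key.sign(hashlib.sha256(content).digest())

# Consumer side: recompute the hash and verify the signature
# against the publisher's public key.
def is_authentic(data: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, hashlib.sha256(data).digest())
        return True
    except InvalidSignature:
        return False

print(is_authentic(content, signature))                  # True
print(is_authentic(content + b" tampered", signature))   # False
```

Real C2PA credentials additionally bind assertions such as capture device and edit history into a signed manifest and chain trust to certificate authorities; the sketch shows only the core sign-and-verify step.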


Key Acceleration Mechanism: The Liar's Dividend

The most concerning AI-driven dynamic is the liar's dividend (Chesney & Citron): the mere possibility of fabricated evidence undermines trust in all evidence.

Research Findings

A landmark study published in the American Political Science Review (February 2025) by Schiff, Schiff, and Bueno administered five survey experiments to over 15,000 American adults:

| Finding | Effect | Implication |
|---|---|---|
| Politicians claiming "fake news" | Higher support than apologizing | Incentivizes denialism |
| Effect crosses party lines | Both parties' supporters susceptible | Not limited to polarized base |
| Text vs. video evidence | Liar's dividend works for text, not video | Video still retains credibility |
| Mechanism | Informational uncertainty + oppositional rallying | Two distinct pathways |

Key insight: The effect operates through two channels—creating informational uncertainty ("maybe it really is fake") and rallying supporters against perceived media attacks. Both strategies work independently.

Real-World Examples

| Case | Year | Impact |
|---|---|---|
| Slovakia election deepfake | 2023 | Fake audio of opposition leader discussing election rigging went viral days before the election |
| Gabon coup attempt | 2019 | Claims that the president's video was a deepfake helped spur a military coup attempt |
| Turkey election withdrawal | 2023 | Presidential candidate withdrew after explicit AI-generated videos spread |
| UK Keir Starmer audio | 2024 | Deepfake audio spread rapidly before being exposed as fabrication |

This creates a double bind where neither belief nor disbelief in evidence can be rationally justified—and the effect will intensify as deepfake capabilities improve. According to a YouGov survey, 85% of Americans are "very" or "somewhat" concerned about misleading deepfakes.


Key Uncertainties

| Uncertainty | Range | Implications |
|---|---|---|
| Content authentication adoption rate | 10-60% of major platforms by 2027 | High adoption could restore verification; low adoption means continued erosion |
| AI detection keeping pace | 40-80% detection accuracy | Determines whether technical defenses remain viable |
| Trust recovery timeline | 10-30+ years | Shapes whether coordination for long-term risks is achievable |
| Generational divergence | 18-34: 59% AI trust vs. 55+: 18% (UK) | May resolve naturally or create a permanent trust gap |
| Institutional reform success | Unknown | Trust rebuilding requires demonstrable competence over a sustained period |

Crux Questions

  1. Can content authentication scale? The C2PA standard provides a technical solution, but adoption requires coordination across platforms, media organizations, and hardware manufacturers. If adoption reaches critical mass (estimated 40-60% of content), the liar's dividend may shrink.

  2. Will AI detection capabilities keep pace with generation? Currently, detection lags generation by 6-18 months. If this gap widens, technical verification becomes impossible; if it narrows, authentication systems become viable.

  3. Does media literacy scale? Individual interventions show a d = 0.60 effect size, but effects decay over time (PNAS study), so scaling would require recurring reinforcement rather than one-time training. The sketch below converts these effect sizes into more intuitive terms.
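For readers less used to Cohen's d, the short sketch below converts the effect sizes cited on this page into a "probability of superiority" (the chance that a randomly chosen treated individual scores better than a randomly chosen control), using the standard normal-model conversion Φ(d/√2). The labels are shorthand for the findings cited above.

```python
# Convert the Cohen's d values cited on this page into the probability that
# a randomly chosen treated person outperforms a randomly chosen control,
# computed as Phi(d / sqrt(2)) under the usual normal-model assumptions.
from statistics import NormalDist
from math import sqrt

effects = {
    "fact-check belief reduction": 0.27,
    "media literacy (overall)": 0.60,
    "media literacy (sharing reduction)": 1.04,
}

for label, d in effects.items():
    prob_superiority = NormalDist().cdf(d / sqrt(2))
    print(f"{label}: d = {d:.2f} -> {prob_superiority:.0%} probability of superiority")
```

Under that conversion, d = 0.60 corresponds to roughly a 66% chance that a trained individual evaluates content better than an untrained one, which is meaningful but far from a complete defense.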


Related Pages

  • Approaches: AI-Era Epistemic Security · AI Governance Coordination Technologies
  • Risks: AI Disinformation · AI-Accelerated Reality Fragmentation
  • Models: AI Media-Policy Feedback Loop Model · Electoral Impact Assessment Model · Institutional AI Adaptation Speed Model
  • Policy: Compute Monitoring · International Compute Regimes
  • Transition Model: Societal Trust · Structural Indicators · Reality Coherence · Civilizational Competence
  • Concepts: AI Governance · AI-Era Epistemic Infrastructure · Deepfake Detection · AI Content Authentication · Societal Trust · AI-Era Epistemic Security
  • Key Debates: AI Safety Solution Cruxes