AI-Driven Trust Decline
US government trust declined from 73% (1958) to 17% (2025). AI deepfakes, projected to grow from roughly 500,000 shared videos in 2023 to 8 million by 2025, accelerate this erosion through the "liar's dividend" effect, in which the mere possibility of synthetic content undermines trust in all evidence. Media literacy interventions show a d = 0.60 effect size, and C2PA content authentication offers medium-high promise for verification, though adoption remains uncertain (10-60% of major platforms by 2027).
Quick Assessment
| Dimension | Assessment | Evidence |
|---|---|---|
| Current Trust Level | Critical (17-22% federal government trust) | Pew Research Center 2025: down from 73% in 1958 |
| Decline Rate | Accelerating | 55-point drop since 1958; 5-point decline 2024→2025 alone |
| AI Acceleration | High | 500K deepfake videos shared on social media in 2023, projected 8M by 2025 |
| Coordination Impact | Severe | Only 34% trust government to use AI responsibly (Edelman 2025) |
| Reversibility | Low (decades) | Trust rebuilding requires sustained institutional reform over 10-20+ years |
| Intervention Readiness | Medium | C2PA standard gaining traction; media literacy shows d=0.60 effect size |
| Cross-Domain Risk | High | Trust collapse undermines pandemic response, climate action, AI governance |
Overview
Trust erosion describes the active process of declining public confidence in institutions, experts, media, and verification systems. While the current state of societal trust is analyzed in the Societal Trust parameter page, this page focuses on trust erosion as a risk—examining the threat model, acceleration mechanisms, and responses.
For comprehensive data and analysis, see Societal Trust, which covers:
- Current trust levels (US government trust: 77% in 1964 → 22% in 2024)
- International comparisons and benchmarks
- AI-driven acceleration mechanisms (liar's dividend, deepfakes, scale asymmetry)
- Factors that increase trust (interventions, C2PA standards, media literacy)
- Trajectory scenarios through 2030
Risk Assessment
| Dimension | Assessment | Notes |
|---|---|---|
| Severity | High | Undermines democratic governance, collective action on existential risks |
| Likelihood | Very High | Already occurring; AI accelerating pre-existing trends |
| Timeline | Ongoing | Effects visible now, intensifying over 2-5 years |
| Trend | Accelerating | AI content generation scaling faster than verification capacity |
| Reversibility | Difficult | Rebuilding trust requires sustained effort over decades |
Why Trust Erosion Is a Risk
Trust erosion threatens AI safety and existential risk response through several mechanisms:
| Domain | Impact | Evidence |
|---|---|---|
| AI Governance | Regulatory resistance, lab-government distrust | Only ≈40% trust government to regulate AI appropriately (OECD 2024) |
| Elections | Contested results, violence | 4 in 10 with high grievance approve hostile activism (Edelman 2025) |
| Public Health | Pandemic response failure | Healthcare trust dropped 30.4 pts during COVID-19 |
| Climate Action | Policy paralysis | Only ≈40% believe government will reduce emissions effectively |
| International Cooperation | Treaty verification failures | Liar's dividend undermines evidence-based agreements |
The core dynamic: low trust prevents the coordination needed to address catastrophic risks, while AI capabilities make trust harder to maintain.
Causal Mechanisms
```mermaid
flowchart TD
    subgraph Drivers["Trust Erosion Drivers"]
        AI[AI Content Generation]
        POLAR[Political Polarization]
        INST[Institutional Failures]
    end
    subgraph Mechanisms["Key Mechanisms"]
        LIAR[Liar's Dividend]
        SCALE[Scale Asymmetry]
        AUTH[Authentication Gaps]
    end
    subgraph Outcomes["Systemic Impacts"]
        GOV[Governance Paralysis]
        COORD[Coordination Failure]
        CASCADE[Trust Cascade]
    end
    AI --> LIAR
    AI --> SCALE
    POLAR --> LIAR
    INST --> AUTH
    LIAR --> GOV
    SCALE --> AUTH
    AUTH --> COORD
    GOV --> CASCADE
    COORD --> CASCADE
    CASCADE --> XRISK[Existential Risk Response Failure]
    style AI fill:#ffcccc
    style CASCADE fill:#ffcccc
    style XRISK fill:#ff9999
    style GOV fill:#ffe6cc
    style COORD fill:#ffe6cc
```
The diagram illustrates how AI-driven content generation combines with existing polarization and institutional failures to create compounding trust erosion through the liar's dividend (where the possibility of synthetic media undermines all evidence) and scale asymmetry (where misinformation production vastly outpaces verification capacity).
Historical Trust Trajectory
Trust erosion is not new, but AI capabilities threaten to accelerate existing trends dramatically:
| Period | US Government Trust | Key Driver | AI Relevance |
|---|---|---|---|
| 1958-1964 | 73-77% | Post-WWII institutional confidence | None |
| 1965-1980 | 77% → 26% | Vietnam War, Watergate | None |
| 1980-2000 | 26-44% | Economic growth, Cold War end | None |
| 2001-2008 | 25-49% | 9/11 rally, Iraq War decline | Early internet |
| 2009-2020 | 17-24% | Financial crisis, polarization | Social media amplification |
| 2021-2025 | 17-22% | Pandemic, election disputes, AI content | Deepfakes, LLM misinformation |
Sources: Pew Research Center, Gallup
The AI Acceleration Factor
AI capabilities are fundamentally changing the trust erosion dynamic through several mechanisms:
Scale Asymmetry
The volume of synthetic content is growing exponentially:
- 2023: 500,000+ deepfake videos shared on social media
- 2025 projection: 8 million deepfake videos
- Daily AI image generation: 34 million images/day via tools like DALL-E, Midjourney
- Total since 2022: Over 15 billion AI-generated images created
This creates a fundamental asymmetry: misinformation can be produced faster than it can be verified, and the mere possibility of synthetic content undermines trust in authentic content (Atlantic Council Digital Forensics Lab).
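As a rough consistency check, these figures imply the following growth arithmetic. This is a minimal sketch: the inputs are the estimates cited above, not independent data, and the variable names are illustrative.

```python
# Rough consistency check on the scale figures cited above.
deepfakes_2023 = 500_000       # deepfake videos shared on social media, 2023
deepfakes_2025 = 8_000_000     # projected for 2025
growth = (deepfakes_2025 / deepfakes_2023) ** 0.5 - 1   # two-year span
print(f"Implied annual deepfake growth: {growth:.0%}")  # 300%

images_per_day = 34_000_000    # daily AI image generation estimate
total_images = 15_000_000_000  # cumulative AI images since 2022
print(f"Days of output at the current rate: {total_images / images_per_day:.0f}")
# ~441 days: the cumulative total matches the daily rate only if
# generation ramped up recently rather than running flat since 2022.
```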
Mass-Class Digital Divide
The 2025 Edelman Trust Barometer reveals a significant trust gap:
- 71% of UK bottom income quartile feel they will be "left behind" by AI
- 65% of US bottom income quartile share this concern
- Only 1 in 4 non-managers regularly use AI vs. 2 in 3 managers
This creates a two-tier information environment where those with AI literacy can navigate synthetic content while others cannot, exacerbating existing inequality and trust divides.
Responses That Address This Risk
| Response | Mechanism | Effectiveness | Evidence |
|---|---|---|---|
| Content Authentication | Cryptographic verification via C2PA standard | Medium-High | Fast-tracked to ISO 22144; adopted by Adobe, Microsoft, BBC |
| Epistemic Infrastructure | Fact-checking networks, verification tools (Vera.ai, WeVerify) | Medium | Fact-checks reduce false beliefs (d = 0.27, meta-analysis) |
| Epistemic Security | Platform policies, algorithmic demotion of misinformation | Medium | Variable by platform; X Community Notes shows promise |
| Deepfake Detection | AI-based detection tools, watermarking | Medium | Cat-and-mouse dynamic; detection lags generation by 6-18 months |
| Media Literacy Programs | Critical evaluation training, prebunking | High | d=0.60 overall; d=1.04 for sharing reduction (Huang et al. 2024) |
See Societal Trust for detailed intervention analysis.
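To make the effect sizes in the table above concrete, Cohen's d can be converted to a probability of superiority, Φ(d/√2): the probability that a randomly chosen treated individual outperforms a randomly chosen control, assuming normally distributed outcomes. A minimal sketch using only the standard library (the function name is illustrative):

```python
import math

def probability_of_superiority(d: float) -> float:
    """Probability that a random treated individual outscores a random
    control under normal outcomes: Phi(d / sqrt(2)) = 0.5 * (1 + erf(d / 2))."""
    return 0.5 * (1 + math.erf(d / 2))

for label, d in [("Fact-checking", 0.27),
                 ("Media literacy (overall)", 0.60),
                 ("Media literacy (sharing reduction)", 1.04)]:
    print(f"{label}: d={d} -> {probability_of_superiority(d):.0%}")
# Fact-checking: d=0.27 -> 58%
# Media literacy (overall): d=0.60 -> 66%
# Media literacy (sharing reduction): d=1.04 -> 77%
```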
Key Acceleration Mechanism: The Liar's Dividend
The most concerning AI-driven dynamic is the liar's dividend (Chesney & Citron 2019): the mere possibility of fabricated evidence undermines trust in all evidence.
Research Findings
A landmark study published in the American Political Science Review (February 2025) by Schiff, Schiff, and Bueno administered five survey experiments to over 15,000 American adults:
| Finding | Effect | Implication |
|---|---|---|
| Politicians claiming "fake news" | Higher support than apologizing | Incentivizes denialism |
| Effect crosses party lines | Both parties' supporters susceptible | Not limited to polarized base |
| Text vs. video evidence | Liar's dividend works for text, not video | Video still retains credibility |
| Mechanism | Informational uncertainty + oppositional rallying | Two distinct pathways |
Key insight: The effect operates through two channels—creating informational uncertainty ("maybe it really is fake") and rallying supporters against perceived media attacks. Both strategies work independently.
Real-World Examples
| Case | Year | Impact |
|---|---|---|
| Slovakia election deepfake | 2023 | Fake audio of opposition leader discussing election rigging went viral days before election |
| Gabon coup attempt | 2019 | Claims that president's video was deepfake helped spur military coup attempt |
| Turkey election withdrawal | 2023 | Presidential candidate withdrew after explicit AI-generated videos spread |
| UK Keir Starmer audio | 2023 | Deepfake audio spread rapidly before being exposed as fabrication |
This creates a double bind where neither belief nor disbelief in evidence can be rationally justified—and the effect will intensify as deepfake capabilities improve. According to a YouGov survey, 85% of Americans are "very" or "somewhat" concerned about misleading deepfakes.
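The double bind can be made precise with a toy Bayesian model. The numbers and the two simplifying assumptions (an authentic video entails the event; a fabricated one is uninformative) are illustrative, not drawn from the cited research: as the prior probability that a surfaced video is fake rises, the evidential value of any video collapses toward the viewer's prior.

```python
def posterior_event(prior: float, p_fake: float) -> float:
    """P(event | video surfaced), under two simplifying assumptions:
    an authentic video entails the event occurred; a fabricated video
    carries no information, leaving the viewer at their prior."""
    return (1 - p_fake) * 1.0 + p_fake * prior

for p_fake in (0.0, 0.3, 0.6, 0.9):
    print(f"P(fake)={p_fake:.0%} -> "
          f"P(event|video)={posterior_event(0.05, p_fake):.2f}")
# P(fake)=0% -> 1.00, 30% -> 0.71, 60% -> 0.43, 90% -> 0.14
```

With a 5% base rate, a video that would once have been near-proof conveys almost nothing once fakes are presumed common, which is exactly the dynamic the liar's dividend exploits.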
Key Uncertainties
| Uncertainty | Range | Implications |
|---|---|---|
| Content authentication adoption rate | 10-60% of major platforms by 2027 | High adoption could restore verification; low adoption means continued erosion |
| AI detection keeping pace | 40-80% detection accuracy | Determines whether technical defenses remain viable |
| Trust recovery timeline | 10-30+ years | Shapes whether coordination for long-term risks is achievable |
| Generational divergence | UK AI trust: 59% (ages 18-34) vs. 18% (55+) | May resolve naturally or create a permanent trust gap |
| Institutional reform success | Unknown | Trust rebuilding requires demonstrable competence over sustained period |
Crux Questions
1. Can content authentication scale? The C2PA standard provides a technical solution, but adoption requires coordination across platforms, media organizations, and hardware manufacturers. If adoption reaches critical mass (estimated at 40-60% of content), the liar's dividend may shrink; a conceptual sketch of provenance verification follows this list.
2. Will AI detection capabilities keep pace with generation? Detection currently lags generation by 6-18 months. If this gap widens, technical verification becomes impossible; if it narrows, authentication systems become viable.
3. Does media literacy scale? Individual interventions show a d = 0.60 effect size, but effects decay over time (PNAS study), so they require recurring reinforcement rather than one-time training.
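To illustrate the first crux, here is a minimal conceptual sketch of provenance checking in the spirit of C2PA. This is not the C2PA specification or SDK: the manifest fields, function names, and HMAC signing stand-in are illustrative assumptions, whereas real C2PA uses X.509 certificate-based signatures over structured manifests.

```python
import hashlib
import hmac
import json

# Illustrative shared-secret key. Real C2PA uses X.509 certificate chains;
# HMAC here is only a conceptual stand-in for a signature.
SIGNING_KEY = b"publisher-demo-key"

def sign_claim(claim: dict) -> str:
    payload = json.dumps(claim, sort_keys=True).encode()
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def make_manifest(content: bytes, tool: str) -> dict:
    """Bind a content hash plus creation metadata into a signed claim."""
    claim = {"sha256": hashlib.sha256(content).hexdigest(), "tool": tool}
    return {"claim": claim, "signature": sign_claim(claim)}

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Content verifies only if its hash matches the claim and the
    claim's signature is intact."""
    claim = manifest["claim"]
    if hashlib.sha256(content).hexdigest() != claim["sha256"]:
        return False  # content was altered after signing
    return hmac.compare_digest(sign_claim(claim), manifest["signature"])

photo = b"...raw image bytes..."
manifest = make_manifest(photo, tool="NewsroomCamera/1.0")
print(verify_manifest(photo, manifest))            # True: authentic
print(verify_manifest(photo + b"\x00", manifest))  # False: tampered
```

The cryptography is the easy part; the crux is whether capture devices, editing tools, and platforms all propagate such manifests so that unlabeled content becomes the exception rather than the norm.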
Sources
Trust Data
- Pew Research Center: Public Trust in Government
- Pew Research Center: Public Trust 1958-2025
- Edelman Trust Barometer
- 2025 Edelman Trust Barometer: AI Flash Poll
- Gallup: Trust in Government Depends on Party Control
Liar's Dividend Research
- Chesney & Citron: Deep Fakes: A Looming Challenge
- Schiff, Schiff & Bueno: The Liar's Dividend (APSR 2025)
- Brennan Center: Deepfakes, Elections, and Shrinking the Liar's Dividend
AI Misinformation
- Reuters Institute: AI and Misinformation Trust Conference 2024
- Carnegie Endowment: Can Democracy Survive AI?
- Generative AI and Misinformation: Scoping Review (AI & Society 2025)
Interventions
- Media Literacy Meta-Analysis (Huang et al. 2024)
- PNAS: Digital Media Literacy Intervention
- C2PA: Coalition for Content Provenance and Authenticity
References
- The 2024 Edelman Trust Barometer surveyed global populations to reveal a paradox where rapid innovation, including AI, risks deepening societal distrust and political polarization rather than delivering prosperity. Key findings show that innovation acceptance is declining due to perceived political interference in science, weak institutional governance, and poor communication from scientific institutions. Business is marginally the most trusted institution to introduce innovations, but still falls below the threshold of full public trust.
- This Brennan Center essay examines how public awareness of AI deepfakes paradoxically enables bad actors to falsely disclaim authentic content as fake, a dynamic called the "liar's dividend." It analyzes politicians' incentives to exploit this confusion and proposes countermeasures including provenance verification technology, media literacy, and anti-deception norms to preserve democratic epistemic foundations.
- The Reuters Institute Digital News Report 2023 presents findings from a YouGov survey of over 93,000 online news consumers across 46 markets, documenting shifts in digital news consumption. Key findings include declining trust and interest in news, the growing influence of video-based platforms like TikTok and YouTube (especially in the Global South), and the waning influence of Facebook.
- Gallup's long-running survey tracking American public confidence across major institutions including government, military, media, science, and business. Provides longitudinal data on trust trends, revealing broad erosion of institutional credibility over decades. Serves as a key empirical reference for understanding social trust dynamics relevant to governance and coordination challenges.
- Chesney and Citron's seminal 2019 law review article examines the emerging threat of deepfake technology to privacy, democratic discourse, and national security. The paper analyzes how AI-generated synthetic media undermines trust in audiovisual evidence and proposes legal and technical countermeasures. It is widely cited as a foundational work in the legal and policy literature on synthetic media.
- Pew Research Center's long-running survey tracking American public trust in the federal government, showing it has fallen to historically low levels: only 16% of Americans say they trust the government in Washington to do the right thing always or most of the time as of 2024. The data spans over six decades, contextualizing current distrust within broader political and social trends. This collapse in institutional trust has significant implications for collective action, governance effectiveness, and democratic legitimacy.
- This APSR study (2025) by Schiff, Schiff, and Bueno examines how the widespread awareness of deepfakes and misinformation enables politicians to falsely deny authentic evidence of their misconduct by claiming it is fabricated. The research demonstrates that this "liar's dividend" undermines democratic accountability, as voters become uncertain whether real evidence is genuine or AI-generated.
- A Pew Research Center survey documenting declining public trust in scientists, medical professionals, and other institutions in the United States, with a pronounced and widening partisan gap between Republicans and Democrats. The report highlights how trust erosion varies significantly by political affiliation, education, and demographic group, with implications for science-based policymaking and public health.
- Brookings Institution researchers examine how AI-generated disinformation, deepfakes, and algorithmic amplification on platforms like TikTok could influence the 2024 elections. A key focus is the "liar's dividend": the phenomenon where the mere existence of convincing deepfake technology allows bad actors to dismiss authentic evidence as fabricated. The piece analyzes both direct manipulation risks and the subtler epistemic harms of eroding trust in genuine media.
- A Gallup poll from October 2025 reporting that American public trust in mass media has reached a new historic low. This survey tracks longitudinal trends in media credibility and public confidence in news institutions, with implications for how information, including about AI and emerging technologies, is received and processed by the public.
- The C2PA is an industry coalition that has developed an open technical standard for attaching verifiable provenance metadata to digital content, functioning like a "nutrition label" that tracks a file's origin, creation tools, and edit history. This standard aims to help consumers and platforms distinguish authentic content from manipulated or AI-generated media. It is backed by major technology and media companies including Adobe, Microsoft, and the BBC.
- This Carnegie Endowment analysis examines how AI threatens democratic governance through disinformation, surveillance, and power concentration, while exploring whether democratic institutions can adapt to manage AI's destabilizing effects. It assesses the risk that AI accelerates authoritarian consolidation and erodes checks and balances that protect democratic norms.