Longterm Wiki
Summary

Comprehensive survey compilation showing US concern about AI rising rapidly (37% → 50%, 2021-2025), strong support for regulation (70-80%), and a massive literacy gap (99% use AI products but only 39% realize it). Only 12% mention existential risk unprompted despite 69% supporting a development pause, suggesting concern focuses on near-term harms; trust is declining across institutions, with a 45-point gap between China (72%) and the US (32%).

Public Opinion

Key Links

  • Official Website: wikiedu.org
  • Wikipedia: en.wikipedia.org

Overview

Public opinion shapes the political feasibility of AI governance and influences the trajectory of AI development through democratic pressure, consumer behavior, and social norms. This page tracks key metrics on public awareness, concern, trust, and literacy regarding AI risks and capabilities.

Key Finding: While AI awareness is nearly universal (95%+ in the US), specific awareness of existential risk remains low (~12% mention it unprompted), though general concern is rising rapidly (from 37% in 2021 to 50% in 2025).


1. Population Aware of AI Existential Risk

Specific X-Risk Awareness

  • 12.3% of US adults mention AI as a potential cause of human extinction (unprompted)

    • Source: Rethink Priorities survey, April 2023 (n=300)
    • Up from 6.9% in December 2022 (+5.4 percentage points, a 78% relative increase)
    • Growth attributed to ChatGPT media coverage surge
  • 43% are very/somewhat concerned about AI causing the end of humanity

    • Source: YouGov, June 2025
    • 16% very concerned, 27% somewhat concerned
    • Up from 37% in March 2025
  • 59% of US adults support prioritizing mitigating extinction risk from AI

    • Source: Rethink Priorities online poll, 2023
    • 26% disagree
    • Among those who disagree: 36% cite "other priorities," 23% say "not extinction," 18% say "not yet"

Ranking Among Existential Threats

  • 4% select AI as the most likely cause of human extinction
    • Ranks below nuclear war (42%), climate change, pandemics, and asteroids
    • Source: Rethink Priorities nationally-representative survey

Expert vs. Public Gap

  • AI researchers estimate median 5% chance of human extinction from AI
    • Source: AI Impacts survey of 2,778 researchers, 2023
    • ~40% of researchers indicate >10% chance of catastrophic outcomes
    • Unchanged from 2022 survey (same 5% median)
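As a sketch of how summary figures like these are derived, the snippet below computes a median and a tail share from a list of researcher probability estimates. The values are hypothetical placeholders, not the actual AI Impacts responses:

```python
# Sketch: deriving survey summary statistics (median estimate, share of
# respondents above a threshold). The estimates below are hypothetical
# illustration data, NOT actual AI Impacts survey responses.
from statistics import median

estimates = [0.01, 0.02, 0.05, 0.05, 0.10, 0.15, 0.30]

med = median(estimates)                                        # middle value
tail_share = sum(e > 0.10 for e in estimates) / len(estimates) # share above 10%

print(f"median P(extinction): {med:.0%}")
print(f"share above 10%: {tail_share:.0%}")
```

With these placeholder values the median lands at 5% and two of seven respondents exceed 10%, mirroring the shape (low median, fat tail) the survey reports.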

2. Population Concerned About AI Safety (General)

Overall Concern Trends (US)

  • 50% of Americans are more concerned than excited about AI (2025)

    • Source: Pew Research, survey of 5,023 adults, June 9-15, 2025
    • Up from 37% in 2021 (+13 percentage points, a 35% relative increase)
    • Only 11% are more excited than concerned
  • 57% rate societal risks of AI as high

    • Source: Pew Research, August 2024 (n=5,410)
    • vs. 25% who rate benefits as high
  • 47% believe AI effects on society will be negative

    • Source: YouGov, June 2025
    • Up from 34% in December 2024 (a 38% relative increase)
    • Steady upward trend: 34% → 40% → 41% → 47% over 6 months
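A note on the arithmetic used throughout this page: figures like "+38% increase" are relative changes, which differ from percentage-point differences (34% → 47% is +13 points but a 38% relative increase). A minimal sketch of both conventions:

```python
def point_change(old: float, new: float) -> float:
    """Difference in percentage points."""
    return new - old

def relative_change(old: float, new: float) -> float:
    """Change expressed as a percent of the old value."""
    return (new - old) / old * 100

# Pew concern figures: 37% (2021) -> 50% (2025)
assert point_change(37, 50) == 13            # +13 percentage points
assert round(relative_change(37, 50)) == 35  # ~35% relative increase

# YouGov negative-effects figures: 34% -> 47%
assert round(relative_change(34, 47)) == 38  # ~38% relative increase
```

Keeping the two measures distinct matters when comparing across surveys: a small base makes a relative change look dramatic even when the point change is modest.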

Trust in AI Systems

  • 31% of Americans trust AI (2% fully, 29% somewhat)

    • Source: Gallup, 2025 (n=3,128)
    • 60% do not trust AI to make fair, unbiased decisions
    • Only 8% consider themselves "very knowledgeable" about AI
  • Trust is declining: 25% say trust decreased in past year

    • vs. 21% increased, 47% unchanged
    • Source: YouGov, 2025
  • 18% would trust AI to make decisions/take actions

    • vs. 53% would not
    • Even among Gen Z: 43% mistrustful vs. 26% trustful

Specific Concerns (US Public vs. AI Experts)

Concern                          Public   AI Experts
Less human connection            57%      37%
Job loss                         56%      25%
More concerned than excited      51%      15%
Negative impact next 20 years    83%      44%

Source: Pew Research, August 2024

Global Variation in Concern

Most concerned countries (>50% more concerned than excited):

  • United States
  • Italy
  • Australia
  • Brazil
  • Greece

Least concerned:

  • South Korea (16% more concerned than excited)
  • China (72% trust AI, the highest level globally)

Source: Pew Research Global Survey, October 2025


3. AI Risk Media Coverage

Article Volume

  • 38,787 articles on AI from 12 major English newspapers (2010-2023)

    • Source: Ittefaq et al. (2024), analysis of 12-country media coverage
    • Dramatic increase in recent years, especially post-ChatGPT
  • 24,827+ articles on generative AI specifically (Jan 2018 - Nov 2023)

    • Source: Landscape of Generative AI in Global News (2024)
    • Sharp spike after ChatGPT launch (November 2022)

Academic Publications on AI in Journalism

  • 2023: 39 publications
  • 2024: 106 publications (+172% year-over-year)
  • 2020-2022: ~20-25 publications/year (plateau)

Source: Systematic bibliometric analysis (2016-2024)

Media Sentiment

  • 21% negative coverage
  • 13% positive coverage
  • 66% neutral coverage

Source: 12-country analysis (2010-2023)

Trend in Critical Coverage

  • UK and US media have become more critical over time
  • Progressive outlets (NYT, The Guardian) "going slightly negative about AI each year"
  • Increased references to risks and concerns

4. AI Safety Google Search Trends

General AI Search Interest

  • +250% year-over-year increase in "artificial intelligence" searches (UK, 2023)

    • Source: Think with Google, 2023
  • Peak interest: October 2023 for "AI-powered search"

    • Stabilized in first half of 2024
    • Source: Statista, Google Trends data

Generative AI Specific

  • Peak: Week ending March 3, 2024 (score: 100)
    • Surge from mid-February to early March 2024
    • Source: Statista, Google Trends

Search Behavior Shift

  • Searches evolving from curiosity to practical application
  • "How to AI" queries growing faster than basic informational queries
  • Hands-on queries dominating over definitional searches

2025 Top Trending AI Searches

  • #1 globally: "Gemini" (Google AI assistant)
  • AI-generated content: Barbie AI, action figures, Ghibli-style art
  • Indicates mainstream adoption of AI tools

Source: Google Year in Search 2025


5. Trust in AI Companies

Overall Company Trust

  • 79% of Americans don't trust companies to use AI responsibly

    • Source: Bentley-Gallup Survey, 2025
  • 47% globally confident AI companies protect personal data (2024)

    • Down from 50% in 2023 (a 6% relative decline)
    • Source: Ipsos, cited in Stanford AI Index 2025

Trust by Institution Type (US)

Institution             Trust Level
Employers               71%
Universities            67%
Large tech companies    61%
Start-ups               51%

Source: McKinsey US employee survey, Oct-Nov 2024

Global Trust Trends

  • Global average: Only 46% willing to trust AI systems

    • Source: KPMG Global AI Trust Study (n=48,000 across 47 countries), 2025
  • Advanced economies: Trust drops to 39%

    • Trust declining in wealthy nations, rising in emerging economies

Trust Metrics Declining (2022-2024)

  • Perceived trustworthiness: 63% → 56% (an 11% relative decline)
  • Willingness to rely on AI: 52% → 43% (a 17% relative decline)
  • Worried about AI: 49% → 62% (a 27% relative increase)

Source: Global trust surveys (2022-2024)

Regional Extremes

  • Highest trust: China (72%), India, Nigeria
  • Lowest trust: US (32%); Australia, Ireland, and the Netherlands all under 33%

Source: Edelman Trust Barometer 2025

Experience Effect

  • AI users: 46% trust AI
  • Non-users: 23% trust AI
  • Trust doubles with usage

Source: Gallup, 2025


6. Trust in AI Regulation

Government Regulatory Trust

  • 62% of US public have little/no confidence in government to regulate AI
    • Source: Pew Research, August 2024 (n=5,410)
    • 53% of AI experts also lack confidence

Demand for Regulation

  • 80% say government should maintain safety rules even if AI develops more slowly

    • Source: Gallup/SCSP survey, 2025 (n=3,128)
    • Only 9% prioritize speed over safety
    • Bipartisan support: 88% Democrats, 79% Republicans/Independents
  • 71% believe regulation is needed (41% "much more", 30% "somewhat more")

    • Source: YouGov, 2025
    • Up from 64% in December 2024 (+7 percentage points, an 11% relative increase)
  • 70% globally believe national/international AI regulation is needed

    • Source: KPMG Global AI Trust Study, 2025

Perceived Regulatory Inadequacy

  • 29% of US consumers believe current regulations are sufficient
    • 72% say more regulation needed
    • 81% would trust AI more if laws/policies were in place

Government Role Support

  • ~50% of Americans agree government should have major role in AI oversight (end of 2024)
    • Source: Ipsos/Stanford AI Index, 2024

7. Support for AI Development Pause

Public Support for 6-Month Pause (2023)

  • 69% support a 6-month pause on some AI development
    • 41% strongly support
    • 28% somewhat support
    • 13% oppose (4% strongly, 9% somewhat)
    • 18% not sure

Source: YouGov poll, April 3, 2023 (n=20,810 US adults)

Support for Government-Enforced Moratorium

  • Moratorium in general: average 73% support (Yes + Maybe)

    • 39% definite "Yes" (average)
    • Peak support after media intervention: 54% (CNBC survey), 54% (CNN survey)
  • Government enforcement: Average 69% support (Yes + Maybe)

    • 35% definite "Yes"
    • Peak: 52% (CNBC), 44% (CNN)

Source: EA Forum study, April 2023 (n=300, multiple survey conditions)

Context: Future of Life Institute Open Letter

  • March 2023: Open letter calling for a 6-month pause on systems more powerful than GPT-4
  • 30,000+ signatures including Yoshua Bengio, Stuart Russell, Elon Musk
  • Impact: Generated "renewed urgency within governments," normalized expressing AI fears
  • Reality: No pause occurred; investments in large models continued

8. AI Literacy Rate by Demographic

Self-Reported Understanding

  • 67% globally say they have "good understanding" of AI (2024)
    • Source: Ipsos AI Monitor, 32-country survey

By Generation (Global)

Generation       Good Understanding
Gen Z            72%
Millennials      71%
Gen X            ≈60-65% (est.)
Baby Boomers     58%

Source: Ipsos AI Monitor 2024

Actual vs. Perceived Knowledge (US)

  • 98% have heard about AI
  • 39% report using AI
  • But when asked about 6 common AI products, 99% have used at least one
    • 83% used 4+ AI products
    • Indicates a severe awareness gap about what counts as AI

Source: Gallup, 2025

Depth of Understanding (US)

  • 34.13% highly familiar with AI
  • 50.08% somewhat familiar
  • 15.79% know nothing about AI
  • Among those "familiar":
    • Only 13.73% actually understand AI processes
    • 57.8% have some understanding
    • 28.47% just know the term

Source: Survey of 800 Americans, 2024

University Students (4 Asian/African nations)

  • Average AI literacy: 2.98 out of 5 (moderate skill level)
  • Significant disparities by:
    • Nationality
    • Field of study (technical > non-technical)
    • Academic degree level
  • No significant difference by:
    • Gender
    • Age

Source: Comparative transnational survey (n=1,800), 2024

Gender Gap in Generative AI Use

  • Persistent global gender gap in GenAI usage
  • Male students show more trust in AI than female students (UK & Poland study, 2024)
  • Younger adults (under 45) more open: 36% expect positive impact vs. 19% of 45+

Urban-Rural Divide

  • Rural populations perceive higher privacy/safety risks
  • Urban populations more accepting of AI
  • Functional reliability concerns differ significantly

Student Preparedness

  • 58% of students don't feel they have sufficient AI knowledge/skills
  • 48% don't feel prepared for AI-enabled workplace
  • Despite high usage rates

Source: DEC Global AI Student Survey 2024


9. Accuracy of Public Beliefs About AI Capabilities

Common Misconceptions

Top misconceptions identified (Survey of 800 Americans, 2024):

  1. "AI is fully autonomous and self-learning" (50% need clarification)
  2. "AI makes decisions without any errors" (50% need clarification)

Overestimation vs. Underestimation

  • Public tends to both overestimate and underestimate AI capabilities
  • Overestimate: Autonomy, reasoning, consciousness
  • Underestimate: Current practical capabilities, scope of existing AI use

Understanding of AI Ubiquity

  • Awareness gap: 39% report using AI, but 99% actually use AI-enabled products
  • 61% unaware they use AI regularly
  • Common products not recognized as AI:
    • Navigation apps
    • Streaming recommendation engines
    • Social media algorithms
    • Weather forecasting
    • Online shopping personalization

Source: Gallup, 2025

Expert vs. Public Perception Gap

  • Experts 3x more optimistic about AI impact (56% vs. 17%)
  • Experts 4x more excited than public (47% vs. 11%)
  • Largest perception gaps:
    • Long-term societal impact
    • Job market effects
    • Loss of human connection

Source: Pew Research, August 2024

Misinformation Concerns

  • 83.4% of Americans concerned about AI spreading misinformation in 2024 election
    • Source: US public opinion survey, August 2023

Desire for Better Information

What Americans want to learn more about (2024 survey):

  1. 57.05%: Accuracy of AI-generated results
  2. 56.96%: Data security when using AI
  3. 48.22%: How AI makes decisions

Alignment of Beliefs with Reality

  • Limited research specifically quantifying accuracy of public beliefs
  • Most studies focus on awareness and attitudes, not correctness
  • Significant need for systematic assessment of belief accuracy

Key Trends & Insights

1. Rapid Concern Growth (2021-2025)

  • Concern rose 13 percentage points in 4 years (37% → 50%, a 35% relative increase)
  • Acceleration in 2024-2025: 34% → 47% in 6 months
  • ChatGPT (Nov 2022) identified as major inflection point

2. High Pause Support but Low X-Risk Awareness

  • 69% support development pause
  • Only 12% mention existential risk unprompted
  • Suggests concern is about near-term harms, not extinction

3. Erosion of Trust

  • Trust in AI systems: declining
  • Trust in companies: declining (50% → 47%)
  • Trust in government to regulate: low (62% lack confidence)
  • But: Experience builds trust (46% vs. 23%)

4. Massive Literacy Gap

  • 99% use AI products
  • 39% think they use AI
  • 60-point awareness gap about everyday AI

5. Expert-Public Divergence

  • 3x gap in optimism about long-term impact
  • 4x gap in excitement vs. concern
  • Suggests communication challenge for AI safety advocates

6. Global Variation

  • Emerging markets: High trust, high optimism (China 72% trust)
  • Advanced economies: Low trust, high concern (US 32% trust)
  • 45-point trust gap between China and US

7. Strong Support for Regulation

  • 70-80% want government to prioritize safety over speed
  • Bipartisan consensus (88% Dems, 79% Reps)
  • But low confidence government can deliver (62% skeptical)

Measurement Challenges

1. Question Framing Effects

  • Different terms ("AI," "artificial intelligence," "machine learning") elicit different responses
  • "Existential risk" vs. "very bad outcomes" vs. "human extinction" varies widely
  • Media exposure immediately before survey significantly affects responses

2. Awareness of AI Use

  • People don't recognize AI in everyday products
  • Self-reported usage dramatically underestimates actual usage
  • Complicates measuring "literacy" vs. "awareness"

3. Temporal Volatility

  • Opinions shift rapidly with news cycles
  • ChatGPT caused immediate awareness spike
  • Media interventions show immediate effect (EA Forum study)

4. Sample Representativeness

  • Online panels vs. representative samples
  • US-centric data (most surveys)
  • Limited longitudinal tracking with consistent methodology

5. Correlation vs. Causation

  • Does experience increase trust, or do trusting people seek experience?
  • Does media coverage increase concern, or does concern drive coverage?
  • Difficult to establish causal mechanisms

Data Sources

Primary Survey Organizations

  • Pew Research Center - US public & AI experts (2024-2025)
  • YouGov - US tracking surveys (2024-2025)
  • Gallup - Trust and awareness (2023-2025)
  • Ipsos - Global AI Monitor (30+ countries, 2024-2025)
  • KPMG Global AI Trust Study - 47 countries, 48,000 respondents (2025)

Academic & Nonprofit

  • Rethink Priorities - AI policy & x-risk awareness
  • AI Impacts - Expert surveys (2,778 researchers, 2023)
  • Stanford HAI AI Index - Comprehensive annual report

Media & Trends Analysis

  • Google Trends - Search behavior
  • Ittefaq et al. (2024) - 12-country media analysis (38,787 articles)

Related Metrics

  • Expert Opinion - AI researcher surveys, P(doom) estimates
  • Governance & Policy - Regulatory responses to public opinion
  • Lab Behavior - How public pressure affects AI companies
  • Structural Indicators - Information ecosystem quality

Last Updated

December 24, 2025

Note: This page synthesizes data from multiple surveys conducted 2023-2025. Survey methodologies, sample sizes, and question wordings vary significantly. Numbers should be interpreted as indicative trends rather than precise measurements. For specific use cases, consult original sources for methodology details.

Related Pages

Concepts

  • Future of Life Institute (FLI)
  • AI Governance
  • AI Impacts
  • Yoshua Bengio
  • Elon Musk (AI Industry)
  • Expert Opinion

Models

  • Trust Erosion Dynamics Model
  • Trust Cascade Failure Model
  • Deepfakes Authentication Crisis Model

Approaches

  • AI-Era Epistemic Security
  • AI Content Authentication

Risks

  • Automation Bias (AI Systems)
  • Epistemic Collapse

Transition Model

  • Economic & Labor
  • Structural Indicators

Organizations

  • AI Impacts