Comprehensive survey compilation showing AI concern rising rapidly (37%→50%, 2021-2025) with strong regulatory support (70-80%) but a massive literacy gap (99% use AI products, only 39% realize it). Only 12% mention existential risk unprompted despite 69% supporting a development pause, suggesting concern focuses on near-term harms; trust is declining across institutions, with a 40-point gap between China (72%) and the US (32%).
Public Opinion & Awareness
Overview
Public opinion shapes the political feasibility of AI governance and influences the trajectory of AI development through democratic pressure, consumer behavior, and social norms. This page tracks key metrics on public awareness, concern, trust, and literacy regarding AI risks and capabilities.
Key Finding: While AI awareness is nearly universal (95%+ in the US), specific awareness of existential risk remains low (~12% mention it unprompted), though general concern is rising rapidly (from 37% in 2021 to 50% in 2025).
1. Population Aware of AI Existential Risk
Specific X-Risk Awareness
- 12.3% of US adults mention AI as a potential cause of human extinction (unprompted)
  - Source: Rethink Priorities survey, April 2023 (n=300)
  - Up from 6.9% in December 2022 (a 78% relative increase)
  - Growth attributed to ChatGPT media coverage surge
- 43% are very/somewhat concerned about AI causing the end of humanity
  - Source: YouGov, June 2025
  - 16% very concerned, 27% somewhat concerned
  - Up from 37% in March 2025
- 59% of US adults support prioritizing mitigation of extinction risk from AI
  - Source: Rethink Priorities online poll, 2023
  - 26% disagree
  - Among those who disagree: 36% cite "other priorities," 23% say "not extinction," 18% say "not yet"
Ranking Among Existential Threats
- 4% select AI as the most likely cause of human extinction
- Ranks below nuclear war (42%), climate change, pandemics, and asteroids
- Source: Rethink Priorities nationally-representative survey
Expert vs. Public Gap
- AI researchers estimate a median 5% chance of human extinction from AI
  - Source: AI Impacts survey of 2,778 researchers, 2023
  - ~40% of researchers indicate >10% chance of catastrophic outcomes
  - Unchanged from 2022 survey (same 5% median)
2. Population Concerned About AI Safety (General)
Overall Concern Trends (US)
- 50% of Americans are more concerned than excited about AI (2025)
  - Source: Pew Research, survey of 5,023 adults, June 9-15, 2025
  - Up 13 percentage points from 37% in 2021 (a 35% relative increase)
  - Only 11% are more excited than concerned
- 57% rate societal risks of AI as high
  - Source: Pew Research, August 2024 (n=5,410)
  - vs. 25% who rate benefits as high
- 47% believe AI's effects on society will be negative
  - Source: YouGov, June 2025
  - Up 13 percentage points from 34% in December 2024 (a 38% relative increase)
  - Steady upward trend: 34% → 40% → 41% → 47% over 6 months
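The "steady upward trend" above can be summarized as a least-squares slope. A minimal sketch, assuming (hypothetically) that the four YouGov waves are evenly spaced in time:

```python
def ols_slope(ys):
    """Least-squares slope of evenly spaced observations (change per wave)."""
    n = len(ys)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# The four YouGov waves cited above (Dec 2024 - Jun 2025)
print(ols_slope([34, 40, 41, 47]))  # → 4.0 percentage points per wave
```

With roughly two months between waves, this works out to about 2 points of added concern per month over the period, though with only four data points the fit is merely illustrative.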
Trust in AI Systems
- 31% of Americans trust AI (2% fully, 29% somewhat)
  - Source: Gallup, 2025 (n=3,128)
  - 60% do not trust AI to make fair, unbiased decisions
  - Only 8% consider themselves "very knowledgeable" about AI
- Trust is declining: 25% say their trust decreased in the past year
  - vs. 21% increased, 47% unchanged
  - Source: YouGov, 2025
- 18% would trust AI to make decisions/take actions
  - vs. 53% who would not
  - Even among Gen Z: 43% mistrustful vs. 26% trustful
Specific Concerns (US Public vs. AI Experts)
| Concern | Public | AI Experts |
|---|---|---|
| Less human connection | 57% | 37% |
| Job loss | 56% | 25% |
| More concerned than excited | 51% | 15% |
| Negative impact next 20 years | 83% | 44% |
Source: Pew Research, August 2024
Global Variation in Concern
Most concerned countries (>50% more concerned than excited):
- United States
- Italy
- Australia
- Brazil
- Greece
Least concerned:
- South Korea (16%)
- China (72% say they trust AI, the highest globally)
Source: Pew Research Global Survey, October 2025
3. AI Risk Media Coverage
Article Volume
- 38,787 articles on AI from 12 major English-language newspapers (2010-2023)
  - Source: Ittefaq et al. (2024), analysis of media coverage across 12 countries
  - Dramatic increase in recent years, especially post-ChatGPT
- 24,827+ articles on generative AI specifically (Jan 2018 - Nov 2023)
  - Source: Landscape of Generative AI in Global News (2024)
  - Sharp spike after ChatGPT launch (November 2022)
Academic Publications on AI in Journalism
- 2020-2022: ~20-25 publications/year (plateau)
- 2023: 39 publications
- 2024: 106 publications (+172% year-over-year)
Source: Systematic bibliometric analysis (2016-2024)
Media Sentiment
- 21% negative coverage
- 13% positive coverage
- 66% neutral coverage
Source: 12-country analysis (2010-2023)
Trend in Critical Coverage
- UK and US media have become more critical over time
- Progressive outlets (NYT, The Guardian) "going slightly negative about AI each year"
- Increased references to risks and concerns
4. AI Safety Google Search Trends
General AI Search Interest
- +250% year-over-year increase in "artificial intelligence" searches (UK, 2023)
  - Source: Think with Google, 2023
- Peak interest in "AI-powered search": October 2023
  - Stabilized in the first half of 2024
  - Source: Statista, Google Trends data
Generative AI Specific
- Peak: Week ending March 3, 2024 (score: 100)
- Surge from mid-February to early March 2024
- Source: Statista, Google Trends
Search Behavior Shift
- Searches evolving from curiosity → practical application
- "How to AI" queries growing faster than basic informational queries
- Hands-on queries dominating over definitional searches
2025 Top Trending AI Searches
- #1 globally: "Gemini" (Google AI assistant)
- AI-generated content: Barbie AI, action figures, Ghibli-style art
- Indicates mainstream adoption of AI tools
Source: Google Year in Search 2025
5. Trust in AI Companies
Overall Company Trust
- 79% of Americans don't trust companies to use AI responsibly
  - Source: Bentley-Gallup Survey, 2025
- 47% globally are confident AI companies protect personal data (2024)
  - Down from 50% in 2023 (a 6% relative decline)
  - Source: Ipsos, cited in Stanford AI Index 2025
Trust by Institution Type (US)
| Institution | Trust Level |
|---|---|
| Employers | 71% |
| Universities | 67% |
| Large tech companies | 61% |
| Start-ups | 51% |
Source: McKinsey US employee survey, Oct-Nov 2024
Global Trust Trends
- Global average: only 46% willing to trust AI systems
  - Source: KPMG Global AI Trust Study (n=48,000 across 47 countries), 2025
- Advanced economies: trust drops to 39%
  - Trust is declining in wealthy nations, rising in emerging economies
Trust Metrics Declining (2022-2024)
- Perceived trustworthiness: 63% → 56% (-11% relative)
- Willingness to rely on AI: 52% → 43% (-17% relative)
- Worried about AI: 49% → 62% (+27% relative)
Source: Global trust surveys (2022-2024)
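Surveys in this section mix percentage-point differences with relative changes, which is a common source of misreading (e.g., 63% → 56% is a 7-point drop but an 11% relative decline). A minimal sketch distinguishing the two, using the figures above:

```python
def pct_point_change(old, new):
    """Absolute change in percentage points."""
    return new - old

def relative_change(old, new):
    """Relative change, as a percent of the old value."""
    return (new - old) / old * 100

# Trust-metric shifts from the 2022-2024 surveys cited above
print(pct_point_change(63, 56))          # → -7 (percentage points)
print(round(relative_change(63, 56)))    # → -11 (% relative)
print(round(relative_change(52, 43)))    # → -17 (% relative)
print(round(relative_change(49, 62)))    # → 27 (% relative)
```

When comparing figures across surveys on this page, check which of the two conventions a given delta uses before combining them.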
Regional Extremes
- Highest trust: China (72%), India, Nigeria
- Lowest trust: US (32%), Australia, Ireland, Netherlands (under 33%)
Source: Edelman Trust Barometer 2025
Experience Effect
- AI users: 46% trust AI
- Non-users: 23% trust AI
- Trust doubles with usage
Source: Gallup, 2025
6. Trust in AI Regulation
Government Regulatory Trust
- 62% of US public have little/no confidence in government to regulate AI
- Source: Pew Research, August 2024 (n=5,410)
- 53% of AI experts also lack confidence
Demand for Regulation
- 80% say government should maintain safety rules even if AI develops more slowly
  - Source: Gallup/SCSP survey, 2025 (n=3,128)
  - Only 9% prioritize speed over safety
  - Bipartisan support: 88% of Democrats, 79% of Republicans/Independents
- 71% believe regulation is needed (41% "much more", 30% "somewhat more")
  - Source: YouGov, 2025
  - Up from 64% in December 2024 (an 11% relative increase)
- 70% globally believe national/international AI regulation is needed
  - Source: KPMG Global AI Trust Study, 2025
Perceived Regulatory Inadequacy
- 29% of US consumers believe current regulations are sufficient
- 72% say more regulation needed
- 81% would trust AI more if laws/policies were in place
Government Role Support
- ~50% of Americans agree government should have major role in AI oversight (end of 2024)
- Source: Ipsos/Stanford AI Index, 2024
7. Support for AI Development Pause
Public Support for 6-Month Pause (2023)
- 69% support a 6-month pause on some AI development
- 41% strongly support
- 28% somewhat support
- 13% oppose (4% strongly, 9% somewhat)
- 18% not sure
Source: YouGov poll, April 3, 2023 (n=20,810 US adults)
Support for Government-Enforced Moratorium
- Average 73% support a pause (Yes + Maybe)
  - 39% definite "Yes" (average)
  - Peak support after media intervention: 54% (CNBC condition), 54% (CNN condition)
- Government enforcement: average 69% support (Yes + Maybe)
  - 35% definite "Yes"
  - Peak: 52% (CNBC), 44% (CNN)
Source: EA Forum study, April 2023 (n=300, multiple survey conditions)
Context: Future of Life Institute Open Letter
- March 2023: Open letter calling for a 6-month pause on systems more powerful than GPT-4
- 30,000+ signatures, including Yoshua Bengio, Stuart Russell, and Elon Musk
- Impact: Generated "renewed urgency within governments," normalized expressing AI fears
- Reality: No pause occurred; investments in large models continued
8. AI Literacy Rate by Demographic
Self-Reported Understanding
- 67% globally say they have "good understanding" of AI (2024)
- Source: Ipsos AI Monitor, 32-country survey
By Generation (Global)
| Generation | Good Understanding |
|---|---|
| Gen Z | 72% |
| Millennials | 71% |
| Gen X | ≈60-65% (est.) |
| Baby Boomers | 58% |
Source: Ipsos AI Monitor 2024
Actual vs. Perceived Knowledge (US)
- 98% have heard about AI
- 39% report using AI
- BUT: When asked about 6 common AI products, 99% have used at least one
- 83% used 4+ AI products
- Indicates severe awareness gap about what counts as AI
Source: Gallup, 2025
Depth of Understanding (US)
- 34.13% highly familiar with AI
- 50.08% somewhat familiar
- 15.79% know nothing about AI
- Among those "familiar":
  - Only 13.73% actually understand AI processes
  - 57.8% have some understanding
  - 28.47% just know the term
Source: Survey of 800 Americans, 2024
University Students (4 Asian/African nations)
- Average AI literacy: 2.98 out of 5 (moderate skill level)
- Significant disparities by:
- Nationality
- Field of study (technical > non-technical)
- Academic degree level
- No significant difference by:
- Gender
- Age
Source: Comparative transnational survey (n=1,800), 2024
Gender Gap in Generative AI Use
- Persistent global gender gap in GenAI usage
- Male students show more trust in AI than female students (UK & Poland study, 2024)
- Younger adults (under 45) more open: 36% expect positive impact vs. 19% of 45+
Urban-Rural Divide
- Rural populations perceive higher privacy/safety risks
- Urban populations more accepting of AI
- Functional reliability concerns differ significantly
Student Preparedness
- 58% of students don't feel they have sufficient AI knowledge/skills
- 48% don't feel prepared for AI-enabled workplace
- Despite high usage rates
Source: DEC Global AI Student Survey 2024
9. Accuracy of Public Beliefs About AI Capabilities
Common Misconceptions
Top misconceptions identified (Survey of 800 Americans, 2024):
- "AI is fully autonomous and self-learning" (50% need clarification)
- "AI makes decisions without any errors" (50% need clarification)
Overestimation vs. Underestimation
- Public tends to both overestimate and underestimate AI capabilities
- Overestimate: Autonomy, reasoning, consciousness
- Underestimate: Current practical capabilities, scope of existing AI use
Understanding of AI Ubiquity
- Awareness gap: 39% report using AI, but 99% actually use AI-enabled products
- 61% unaware they use AI regularly
- Common products not recognized as AI:
- Navigation apps
- Streaming recommendation engines
- Social media algorithms
- Weather forecasting
- Online shopping personalization
Source: Gallup, 2025
Expert vs. Public Perception Gap
- Experts 3x more optimistic about AI impact (56% vs. 17%)
- Experts 4x more excited than public (47% vs. 11%)
- Largest perception gaps:
- Long-term societal impact
- Job market effects
- Loss of human connection
Source: Pew Research, August 2024
Misinformation Concerns
- 83.4% of Americans concerned about AI spreading misinformation in 2024 election
- Source: US public opinion survey, August 2023
Desire for Better Information
What Americans want to learn more about (2024 survey):
- 57.05%: Accuracy of AI-generated results
- 56.96%: Data security when using AI
- 48.22%: How AI makes decisions
Alignment of Beliefs with Reality
- Limited research specifically quantifying accuracy of public beliefs
- Most studies focus on awareness and attitudes, not correctness
- Significant need for systematic assessment of belief accuracy
Key Trends & Insights
1. Rapid Concern Growth (2021-2025)
- Concern rose 13 percentage points in 4 years (37% → 50%, a 35% relative increase)
- Acceleration in 2024-2025: 34% → 47% in 6 months
- ChatGPT (Nov 2022) identified as major inflection point
2. High Pause Support but Low X-Risk Awareness
- 69% support development pause
- Only 12% mention existential risk unprompted
- Suggests concern is about near-term harms, not extinction
3. Erosion of Trust
- Trust in AI systems: declining
- Trust in companies: declining (50% → 47%)
- Trust in government to regulate: low (62% lack confidence)
- But: Experience builds trust (46% vs. 23%)
4. Massive Literacy Gap
- 99% use AI products
- 39% think they use AI
- 60-point awareness gap about everyday AI
5. Expert-Public Divergence
- 3x gap in optimism about long-term impact
- 4x gap in excitement vs. concern
- Suggests communication challenge for AI safety advocates
6. Global Variation
- Emerging markets: High trust, high optimism (China 72% trust)
- Advanced economies: Low trust, high concern (US 32% trust)
- 40-point trust gap between China and the US
7. Strong Support for Regulation
- 70-80% want government to prioritize safety over speed
- Bipartisan consensus (88% Dems, 79% Reps)
- But low confidence government can deliver (62% skeptical)
Measurement Challenges
1. Question Framing Effects
- "AI" vs. "artificial intelligence" vs. "machine learning" elicits different responses
- "Existential risk" vs. "very bad outcomes" vs. "human extinction" varies widely
- Media exposure immediately before survey significantly affects responses
2. Awareness of AI Use
- People don't recognize AI in everyday products
- Self-reported usage dramatically underestimates actual usage
- Complicates measuring "literacy" vs. "awareness"
3. Temporal Volatility
- Opinions shift rapidly with news cycles
- ChatGPT caused immediate awareness spike
- Media interventions show immediate effect (EA Forum study)
4. Sample Representativeness
- Online panels vs. representative samples
- US-centric data (most surveys)
- Limited longitudinal tracking with consistent methodology
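Sample size matters for how precisely these headline figures should be read. A minimal sketch of the standard 95% margin of error for a sample proportion (normal approximation; the pairing of the 12.3% figure with these sample sizes is illustrative):

```python
import math

def moe_95(p, n):
    """Approximate 95% margin of error for a sample proportion (normal approximation)."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# The 12.3% x-risk figure came from n=300; some Pew waves used n=5,410
print(round(moe_95(0.123, 300) * 100, 1))   # → 3.7 percentage points
print(round(moe_95(0.123, 5410) * 100, 1))  # → 0.9 percentage points
```

At n=300, a reported 12.3% is consistent with anything from roughly 8.6% to 16.0%, so small wave-to-wave movements in the smaller surveys should be read cautiously.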
5. Correlation vs. Causation
- Does experience increase trust, or do trusting people seek experience?
- Does media coverage increase concern, or does concern drive coverage?
- Difficult to establish causal mechanisms
Data Sources
Primary Survey Organizations
- Pew Research Center - US public & AI experts (2024-2025)
- YouGov - US tracking surveys (2024-2025)
- Gallup - Trust and awareness (2023-2025)
- Ipsos - Global AI Monitor (30+ countries, 2024-2025)
- KPMG Global AI Trust Study - 47 countries, 48,000 respondents (2025)
Academic & Nonprofit
- Rethink Priorities - AI policy & x-risk awareness surveys
- AI Impacts - Expert surveys (2,778 researchers, 2023)
- Stanford HAI AI Index - Comprehensive annual report
Media & Trends Analysis
- Google Trends - Search behavior
- Ittefaq et al. (2024) - 12-country media analysis (38,787 articles)
Related Metrics
- Expert Opinion - AI researcher surveys, P(doom) estimates
- Governance & Policy - Regulatory responses to public opinion
- Lab Behavior - How public pressure affects AI companies
- Structural Indicators - Information ecosystem quality
Last Updated
December 24, 2025
Note: This page synthesizes data from multiple surveys conducted 2023-2025. Survey methodologies, sample sizes, and question wordings vary significantly. Numbers should be interpreted as indicative trends rather than precise measurements. For specific use cases, consult original sources for methodology details.