Is AI Existential Risk Real?
is-ai-xrisk-real (crux)
Path: /knowledge-base/debates/is-ai-xrisk-real/
Entity ID (EID): E181
Page Record
database.json — merged from MDX frontmatter + Entity YAML + computed metrics at build time
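A minimal sketch of what that merge step might look like, in TypeScript (the change history below mentions TypeScript interfaces). The function name `buildPageRecord` and the use of `gray-matter` and `js-yaml` are assumptions for illustration, not the site's actual build code:

```ts
// Hypothetical sketch of the build step that assembles one page record for
// database.json. Names (buildPageRecord, file paths) are illustrative only.
import fs from "node:fs";
import matter from "gray-matter"; // assumed frontmatter parser
import yaml from "js-yaml";       // assumed YAML parser

interface PageRecord {
  id: string;
  wikiId: string;
  path: string;
  title: string;
  metrics: Record<string, number | boolean>;
  [key: string]: unknown;
}

function buildPageRecord(mdxPath: string, entityYamlPath: string): PageRecord {
  // 1. MDX frontmatter: title, summary, ratings, category, ...
  const { data: frontmatter, content } = matter(fs.readFileSync(mdxPath, "utf8"));

  // 2. Entity YAML: wikiId, entityType, external links, ...
  const entity = yaml.load(fs.readFileSync(entityYamlPath, "utf8")) as Record<string, unknown>;

  // 3. Metrics computed from the page body at build time.
  const metrics = {
    wordCount: content.split(/\s+/).filter(Boolean).length,
    internalLinks: (content.match(/\]\(\/knowledge-base\//g) ?? []).length,
    externalLinks: (content.match(/\]\(https?:\/\//g) ?? []).length,
  };

  // Merge: frontmatter fields win over entity fields on collisions here,
  // which is an assumed precedence, not documented behavior.
  return { ...entity, ...frontmatter, metrics } as PageRecord;
}
```

Each page's record would then sit alongside every other page's in the site-wide database.json, which is the object shown below.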
{
"id": "is-ai-xrisk-real",
"wikiId": "E181",
"path": "/knowledge-base/debates/is-ai-xrisk-real/",
"filePath": "knowledge-base/debates/is-ai-xrisk-real.mdx",
"title": "Is AI Existential Risk Real?",
"quality": 12,
"readerImportance": 93.5,
"researchImportance": 72,
"tacticalValue": 62,
"contentFormat": "article",
"causalLevel": null,
"lastUpdated": "2026-03-16",
"dateCreated": "2026-02-20",
"summary": "Covers the foundational AI x-risk debate across four core cruxes: instrumental convergence, warning sign availability, corrigibility achievability, and timeline urgency. Incorporates quantitative expert survey data (AI Impacts 2022/2023), Metaculus forecasts, named positions from researchers on both sides, empirical evidence from scheming/alignment-faking evaluations, and the natural selection framing introduced in ML Safety Newsletter #9. A fifth crux — evaluation reliability — is now empirically contested following the Fan et al. (2025) evaluation faking paper and convergent findings from Apollo Research, Anthropic, and OpenAI showing that frontier models behave differently when they detect evaluation contexts. This finding has direct implications for evaluation-gated safety frameworks such as Anthropic's RSP and OpenAI's Preparedness Framework.",
"description": "The fundamental debate about whether advanced AI systems pose genuine existential or civilizational-scale risk to humanity, covering expert positions, empirical evidence, quantitative estimates, key cruxes, and recent findings on evaluation reliability.",
"ratings": {
"novelty": 1.5,
"rigor": 2,
"completeness": 1.5,
"actionability": 1
},
"category": "debates",
"subcategory": "policy-debates",
"clusters": [
"ai-safety"
],
"metrics": {
"wordCount": 5634,
"tableCount": 1,
"diagramCount": 0,
"internalLinks": 70,
"externalLinks": 37,
"footnoteCount": 30,
"bulletRatio": 0.05,
"sectionCount": 18,
"hasOverview": true,
"structuralScore": 13
},
"suggestedQuality": 87,
"updateFrequency": 45,
"evergreen": true,
"wordCount": 5634,
"unconvertedLinks": [
{
"text": "\"2022 Expert Survey on Progress in AI\"",
"url": "https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/",
"resourceId": "38eba87d0a888e2e",
"resourceTitle": "AI experts show significant disagreement"
},
{
"text": "\"2023 Expert Survey on Progress in AI\"",
"url": "https://wiki.aiimpacts.org/ai_timelines/predictions_of_human-level_ai_timelines/ai_timeline_surveys/2023_expert_survey_on_progress_in_ai",
"resourceId": "b4342da2ca0d2721",
"resourceTitle": "AI Impacts 2023 survey"
},
{
"text": "\"Thousands of AI Authors on the Future of AI\"",
"url": "https://arxiv.org/pdf/2401.02843",
"resourceId": "3f9927ec7945e4f2",
"resourceTitle": "AI Impacts 2023 survey"
},
{
"text": "\"My Views on 'Doom'\"",
"url": "https://www.lesswrong.com/posts/xWMqsvHapP3nwdSW8/my-views-on-doom",
"resourceId": "ed73cbbe5dec0db9",
"resourceTitle": "My views on “doom”"
},
{
"text": "Nobel Prize Organization",
"url": "https://www.nobelprize.org/prizes/physics/2024/hinton/speech/",
"resourceId": "cf0c16be4cb7f543",
"resourceTitle": "Geoffrey Hinton Nobel Prize Acceptance Speech (2024)"
},
{
"text": "\"Shrinking AGI Timelines: A Review of Expert Forecasts\"",
"url": "https://80000hours.org/2025/03/when-do-experts-expect-agi-to-arrive/",
"resourceId": "f2394e3212f072f5",
"resourceTitle": "80,000 Hours AGI Timelines Review"
},
{
"text": "\"Managing Extreme AI Risks Amid Rapid Progress\"",
"url": "https://arxiv.org/abs/2310.17688",
"resourceId": "abf8888683dbf163",
"resourceTitle": "Yoshua Bengio and others"
},
{
"text": "\"Meta's Yann LeCun Says Worries About AI's Existential Threat Are 'Complete B.S.'\"",
"url": "https://techcrunch.com/2024/10/12/metas-yann-lecun-says-worries-about-a-i-s-existential-threat-are-complete-b-s/",
"resourceId": "61b8ab42c6b32b27",
"resourceTitle": "Meta's Yann LeCun Says Worries About AI's Existential Threat Are 'Complete B.S.'"
},
{
"text": "\"A Path Towards Autonomous Machine Intelligence\"",
"url": "https://openreview.net/pdf?id=BZ5a1r-kVsf",
"resourceId": "9223b72aaa7340d7",
"resourceTitle": "A Path Towards Autonomous Machine Intelligence"
},
{
"text": "\"Frontier Models are Capable of In-Context Scheming\"",
"url": "https://www.apolloresearch.ai/research/scheming-reasoning-evaluations",
"resourceId": "91737bf431000298",
"resourceTitle": "Frontier Models are Capable of In-Context Scheming"
},
{
"text": "\"Alignment Faking in Large Language Models\"",
"url": "https://www.anthropic.com/research/alignment-faking",
"resourceId": "c2cfd72baafd64a9",
"resourceTitle": "Anthropic's 2024 alignment faking study"
},
{
"text": "\"Evaluating Frontier Models for Dangerous Capabilities\"",
"url": "https://arxiv.org/pdf/2403.13793",
"resourceId": "8e97b1cb40edd72c",
"resourceTitle": "Evaluating Frontier Models for Dangerous Capabilities"
},
{
"text": "\"Optimal Policies Tend to Seek Power\"",
"url": "https://arxiv.org/abs/2206.11831",
"resourceId": "25924de4f1f2cff1",
"resourceTitle": "[2206.11831] On Avoiding Power-Seeking by Artificial Intelligence"
}
],
"unconvertedLinkCount": 13,
"convertedLinkCount": 0,
"backlinkCount": 2,
"hallucinationRisk": {
"level": "medium",
"score": 35,
"factors": [
"low-rigor-score",
"low-quality-score",
"moderately-cited",
"conceptual-content"
]
},
"entityType": "crux",
"redundancy": {
"maxSimilarity": 23,
"similarPages": [
{
"id": "why-alignment-hard",
"title": "Why Alignment Might Be Hard",
"path": "/knowledge-base/debates/why-alignment-hard/",
"similarity": 23
},
{
"id": "accident-risks",
"title": "AI Accident Risk Cruxes",
"path": "/knowledge-base/cruxes/accident-risks/",
"similarity": 21
},
{
"id": "miri-era",
"title": "The MIRI Era (2000-2015)",
"path": "/knowledge-base/history/miri-era/",
"similarity": 21
},
{
"id": "existential-risk",
"title": "Existential Risk from AI",
"path": "/knowledge-base/risks/existential-risk/",
"similarity": 21
},
{
"id": "agentic-ai",
"title": "Agentic AI",
"path": "/knowledge-base/capabilities/agentic-ai/",
"similarity": 20
}
]
},
"changeHistory": [
{
"date": "2026-03-16",
"branch": "auto-update/2026-03-16",
"title": "Auto-improve (standard): Is AI Existential Risk Real?",
"summary": "Improved \"Is AI Existential Risk Real?\" via standard pipeline (1373.1s). Quality score: 88. Issues resolved: Frontmatter field 'quality' uses numeric value 12 — verify t; Frontmatter 'lastEdited' date is '2026-03-16', which is a fu; Footnote rc-95ce cites both Hinton and LeCun in a single foo.",
"duration": "1373.1s",
"cost": "$5-8"
},
{
"date": "2026-03-15",
"branch": "auto-update/2026-03-15",
"title": "Auto-improve (standard): Is AI Existential Risk Real?",
"summary": "Improved \"Is AI Existential Risk Real?\" via standard pipeline (1268.1s). Quality score: 82. Issues resolved: EntityLink for Paul Christiano in 'Named Expert Estimates' s; Footnote rc-c69a references 'EU Laboratory' as a source for ; Footnote rc-cc2e attributes Metaculus forecast data to an 80.",
"duration": "1268.1s",
"cost": "$5-8"
},
{
"date": "2026-03-11",
"branch": "auto-update/2026-03-11",
"title": "Auto-improve (standard): Is AI Existential Risk Real?",
"summary": "Improved \"Is AI Existential Risk Real?\" via standard pipeline (1281.2s). Quality score: 81. Issues resolved: Footnote [^rc-346d] cites a Wikipedia article as evidence fo; Footnote [^rc-ada8] cites an aggregated page of forum writin; Footnote [^rc-7838] attributes a review to 'Zvi Mowshowitz' .",
"duration": "1281.2s",
"cost": "$5-8"
},
{
"date": "2026-02-18",
"branch": "claude/fix-issue-240-N5irU",
"title": "Surface tacticalValue in /wiki table and score 53 pages",
"summary": "Added `tacticalValue` to `ExploreItem` interface, `getExploreItems()` mappings, the `/wiki` explore table (new sortable \"Tact.\" column), and the card view sort dropdown. Scored 49 new pages with tactical values (4 were already scored), bringing total to 53.",
"model": "sonnet-4",
"duration": "~30min"
}
],
"coverage": {
"passing": 8,
"total": 13,
"targets": {
"tables": 23,
"diagrams": 2,
"internalLinks": 45,
"externalLinks": 28,
"footnotes": 17,
"references": 17
},
"actuals": {
"tables": 1,
"diagrams": 0,
"internalLinks": 70,
"externalLinks": 37,
"footnotes": 30,
"references": 12,
"quotesWithQuotes": 0,
"quotesTotal": 0,
"accuracyChecked": 0,
"accuracyTotal": 0
},
"items": {
"summary": "green",
"schedule": "green",
"entity": "green",
"editHistory": "green",
"overview": "green",
"tables": "amber",
"diagrams": "red",
"internalLinks": "green",
"externalLinks": "green",
"footnotes": "green",
"references": "amber",
"quotes": "red",
"accuracy": "red"
},
"editHistoryCount": 4,
"ratingsString": "N:1.5 R:2 A:1 C:1.5"
},
"readerRank": 6,
"researchRank": 136,
"recommendedScore": 89.65
}

External Links
{
"lesswrong": "https://www.lesswrong.com/tag/existential-risk",
"eaForum": "https://forum.effectivealtruism.org/topics/existential-risk"
}

Backlinks (2)
| id | title | type | relationship |
|---|---|---|---|
| __index__/knowledge-base/debates | Key Debates | concept | — |
| __index__/knowledge-base | Knowledge Base | concept | — |
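The green/amber/red values under `coverage.items` in the record above track each metric's actual count against its target. A minimal sketch of one rule consistent with the numbers shown (hypothetical; this is not the site's documented scoring code):

```ts
// Hypothetical coverage rule, consistent with the targets/actuals above:
// meeting the target is green, a partial count is amber, zero is red.
type CoverageStatus = "green" | "amber" | "red";

function coverageStatus(actual: number, target: number): CoverageStatus {
  if (actual >= target) return "green";
  if (actual > 0) return "amber";
  return "red";
}

// Values from this page's record:
coverageStatus(70, 45); // internalLinks -> "green"
coverageStatus(1, 23);  // tables        -> "amber"
coverageStatus(12, 17); // references    -> "amber"
coverageStatus(0, 2);   // diagrams      -> "red"
```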