Longterm Wiki

Two types of AI existential risk (2025)

paper

Author

Atoosa Kasirzadeh

Credibility Rating

4/5 — High

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Springer

2025 philosophy paper distinguishing between decisive (abrupt catastrophic) and accumulative (gradual erosive) pathways to AI existential risk, using complex systems analysis to reconcile competing theoretical frameworks in AI safety discourse.

Paper Details

Citations
22
Year
2025
Methodology
peer-reviewed
Categories
Philosophical Studies

Metadata

journal article · analysis

Summary

This paper distinguishes two pathways to AI existential risk: the conventional "decisive" view, which focuses on abrupt catastrophic events caused by advanced AI systems (such as a superintelligence takeover), and an alternative "accumulative" view, which holds that existential catastrophe could instead result from gradual, incremental AI-induced disruptions that erode systemic resilience over time. Using complex systems analysis, the author argues that the accumulative hypothesis can reconcile seemingly incompatible perspectives on AI risk and carries important implications for AI governance and long-term safety strategies.

Cited by 1 page

Page | Type | Quality
AI Value Lock-in | Risk | 64.0

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 98 KB
Two types of AI existential risk: decisive and accumulative | Philosophical Studies | Springer Nature Link
Open access · Published: 30 March 2025 · Philosophical Studies, Volume 182, pages 1975–2003 (2025)
 Abstract

 The conventional discourse on existential risks (x-risks) from AI typically focuses on abrupt, dire events caused by advanced AI systems, particularly those that might achieve or surpass human-level intelligence. These events have severe consequences that either lead to human extinction or irreversibly cripple human civilization to a point beyond recovery. This decisive view, however, often neglects the serious possibility of AI x-risk manifesting gradually through an incremental series of smaller yet interconnected disruptions, crossing critical thresholds over time. This paper contrasts the conventional decisive AI x-risk hypothesis with what I call an accumulative AI x-risk hypothesis . While the former envisions an overt AI takeover pathway, characterized by scenarios like uncontrollable superintelligence, the latter suggests a different pathway to existential catastrophes. This involves a gradual accumulation of AI-induced threats such as severe vulnerabilities and systemic erosion of critical economic and political structures. The accumulative hypothesis suggests a boiling frog scenario where incremental AI risks slowly undermine systemic and societal resilience until a triggering event results in irreversible collapse. Through complex systems analysis, this paper examines the distinct assumptions differentiating these two hypotheses. It is then argued that the accumulative view can reconcile seemingly incompatible perspectives on AI risks. The implications of differentiating between the two types of pathway—the decisive and th

... (truncated, 98 KB total)
Resource ID: df1935303ba9ba67 | Stable ID: YzU3NzRkNz