Longterm Wiki

Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development

Research paper published in 2025

Authors

Jan Kulveit · Raymond Douglas · Nora Ammann · Deger Turan · David Krueger · David Duvenaud

Credibility Rating

Good (3/5)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

This paper introduces the concept of 'gradual disempowerment' to analyze how incremental AI capability improvements can systematically undermine human agency over critical societal systems, offering an important counterpoint to catastrophic takeover scenarios in AI safety discourse.

Paper Details

Citations: 49 (2 influential)
Year: 2025

Metadata

arXiv preprint · primary source

Abstract

This paper examines the systemic risks posed by incremental advancements in artificial intelligence, developing the concept of 'gradual disempowerment', in contrast to the abrupt takeover scenarios commonly discussed in AI safety. We analyze how even incremental improvements in AI capabilities can undermine human influence over large-scale systems that society depends on, including the economy, culture, and nation-states. As AI increasingly replaces human labor and cognition in these domains, it can weaken both explicit human control mechanisms (like voting and consumer choice) and the implicit alignments with human interests that often arise from societal systems' reliance on human participation to function. Furthermore, to the extent that these systems incentivise outcomes that do not line up with human preferences, AIs may optimize for those outcomes more aggressively. These effects may be mutually reinforcing across different domains: economic power shapes cultural narratives and political decisions, while cultural shifts alter economic and political behavior. We argue that this dynamic could lead to an effectively irreversible loss of human influence over crucial societal systems, precipitating an existential catastrophe through the permanent disempowerment of humanity. This suggests the need for both technical research and governance approaches that specifically address the risk of incremental erosion of human influence across interconnected societal systems.

Summary

This paper introduces the concept of 'gradual disempowerment' as a distinct AI safety concern, arguing that incremental improvements in AI capabilities—rather than sudden takeover scenarios—pose systemic risks to human influence over critical societal systems. As AI progressively replaces human labor and decision-making in economics, culture, and governance, it can erode both explicit control mechanisms (voting, consumer choice) and implicit human-aligned incentives that depend on human participation. The paper contends that misaligned AI optimization across interconnected domains could create mutually reinforcing feedback loops, potentially leading to irreversible loss of human agency and existential catastrophe. The authors call for technical and governance approaches specifically designed to address this incremental erosion of human influence.

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| AI Value Lock-in | Risk | 64.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 98 KB

[License: arXiv.org perpetual non-exclusive license](https://info.arxiv.org/help/license/index.html#licenses-available)

arXiv:2501.16946v2 [cs.CY] 29 Jan 2025


# Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development


Jan Kulveit¹,\*
Raymond Douglas²,\*
Nora Ammann³,¹
Deger Turan⁴,⁵
David Krueger⁶
David Duvenaud⁷


###### Abstract


This paper examines the systemic risks posed by incremental advancements in artificial intelligence, developing the concept of ‘gradual disempowerment’, in contrast to the abrupt takeover scenarios commonly discussed in AI safety. We analyze how even incremental improvements in AI capabilities can undermine human influence over large-scale systems that society depends on, including the economy, culture, and nation-states. As AI increasingly replaces human labor and cognition in these domains, it can weaken both explicit human control mechanisms (like voting and consumer choice) and the implicit alignments with human interests that often arise from societal systems’ reliance on human participation to function. Furthermore, to the extent that these systems incentivise outcomes that do not line up with human preferences, AIs may optimize for those outcomes more aggressively. These effects may be mutually reinforcing across different domains: economic power shapes cultural narratives and political decisions, while cultural shifts alter economic and political behavior. We argue that this dynamic could lead to an effectively irreversible loss of human influence over crucial societal systems, precipitating an existential catastrophe through the permanent disempowerment of humanity. This suggests the need for both technical research and governance approaches that specifically address the risk of incremental erosion of human influence across interconnected societal systems.


¹ ACS research group, CTS, Charles University
² Telic Research
³ Advanced Research + Invention Agency (ARIA)
⁴ AI Objectives Institute
⁵ Metaculus
⁶ Mila, University of Montreal
⁷ University of Toronto
\*

... (truncated, 98 KB total)