Longterm Wiki

Is Power-Seeking AI an Existential Risk?

paper

Author

Joseph Carlsmith

Credibility Rating

Good (3/5)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

Examines the core argument that power-seeking behavior in advanced AI systems poses an existential risk, analyzing why misaligned agents much more intelligent than humans would plausibly have instrumental incentives to gain control over humans.

Paper Details

Citations: 1 (12 influential)
Year: 2025
Methodology: book-chapter
Categories: Essays on Longtermism

Metadata

arXiv preprint · primary source

Abstract

This report examines what I see as the core argument for concern about existential risk from misaligned artificial intelligence. I proceed in two stages. First, I lay out a backdrop picture that informs such concern. On this picture, intelligent agency is an extremely powerful force, and creating agents much more intelligent than us is playing with fire – especially given that if their objectives are problematic, such agents would plausibly have instrumental incentives to seek power over humans. Second, I formulate and evaluate a more specific six-premise argument that creating agents of this kind will lead to existential catastrophe by 2070.

Summary

This report examines the core argument for existential risk from misaligned AI in two stages. First, it lays out a backdrop picture on which intelligent agency is an extremely powerful force, so creating agents much more intelligent than humans carries serious risk, particularly because misaligned agents would plausibly have instrumental incentives to seek power over humans. Second, it formulates and evaluates a six-premise argument that creating such agents will lead to existential catastrophe by 2070, assigning rough subjective credences to each premise. The work provides a structured analysis of why power-seeking behavior in advanced AI systems represents a fundamental existential concern.

Cited by 5 pages

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 98 KB
# Is Power-Seeking AI an Existential Risk?

Joseph Carlsmith

Open Philanthropy

April 2021

[Video presentation](https://forum.effectivealtruism.org/posts/ChuABPEXmRumcJY57/video-and-transcript-of-presentation-on-existential-risk "") \| [Slides](https://docs.google.com/presentation/d/1UE_cAsogrK5i9wvF3YMIZX-iO9qzjevnrYfTxlKL7ns/ "") \| [Audio version](https://open.spotify.com/episode/0tEsrtllG2hJvOByQW2ydH?si=1a81d15eacb64105 "")

###### Abstract

This report examines what I see as the core argument for concern about existential risk from misaligned artificial intelligence. I proceed in two stages. First, I lay out a backdrop picture that informs such concern. On this picture, intelligent agency is an extremely powerful force, and creating agents much more intelligent than us is playing with fire – especially given that if their objectives are problematic, such agents would plausibly have instrumental incentives to seek power over humans. Second, I formulate and evaluate a more specific six-premise argument that creating agents of this kind will lead to existential catastrophe by 2070. On this argument, by 2070: (1) it will become possible and financially feasible to build relevantly powerful and agentic AI systems; (2) there will be strong incentives to do so; (3) it will be much harder to build aligned (and relevantly powerful/agentic) AI systems than to build misaligned (and relevantly powerful/agentic) AI systems that are still superficially attractive to deploy; (4) some such misaligned systems will seek power over humans in high-impact ways; (5) this problem will scale to the full disempowerment of humanity; and (6) such disempowerment will constitute an existential catastrophe. I assign rough subjective credences to the premises in this argument, and I end up with an overall estimate of ~5% that an existential catastrophe of this kind will occur by 2070. _(May 2022 update: since making this report public in April 2021, my estimate here has gone up, and is now at >10%.)_
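
The headline ~5% figure is the product of the rough conditional credences the report assigns to the six premises. Those per-premise values do not appear in this preview; the numbers below are quoted from the body of the full report, so verify them against the source. A minimal sketch of the arithmetic in Python:

```python
# Sketch of the abstract's credence arithmetic: the overall ~5% estimate
# is the product of conditional credences on the six premises. The
# per-premise values below are those reported in the full paper (they do
# not appear in this preview); the multiplication itself is the point.
premises = {
    "1. Powerful, agentic AI possible and financially feasible by 2070": 0.65,
    "2. Strong incentives to build such systems":                        0.80,
    "3. Much harder to build aligned than misaligned-but-deployable":    0.40,
    "4. Some misaligned systems seek power in high-impact ways":         0.65,
    "5. Power-seeking scales to full disempowerment of humanity":        0.40,
    "6. Disempowerment constitutes an existential catastrophe":          0.95,
}

p_catastrophe = 1.0
for claim, credence in premises.items():
    # Each credence is conditional on the preceding premises holding.
    p_catastrophe *= credence
    print(f"{claim}: {credence:.0%} (running product: {p_catastrophe:.1%})")

print(f"\nOverall estimate: ~{p_catastrophe:.0%}")  # ~5%, matching the abstract
```

Because the estimate is a simple chain product, it is roughly linear in each individual credence; the May 2022 update to >10% corresponds to revising one or more of these premise credences upward.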

## 1 Introduction

Some worry that the development of advanced artificial intelligence will result in existential catastrophe—that is, the destruction of humanity’s longterm potential.[1] See e.g. [Yudkowsky (2008)](https://intelligence.org/files/AIPosNegFactor.pdf ""), [Bostrom (2014)](https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742 ""), [Hawking (2014)](https://www.bbc.com/news/technology-30290540 ""), [Tegmark (2017)](https://www.amazon.com/Life-3-0-Being-Artificial-Intelligence/dp/1101946598 ""), [Christiano (2019)](https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like ""), [Russell (2019)](https://www.amazon.com/Human-Compatible-Artificial-Intelligence-Problem/dp/0525558616/ref=tmm_hrd_swatch_0?_encoding=UTF8&qid=1619197644&sr=1-1 ""), [Ord (2020)](https://www.amazon.com/Precipice-Existential-Risk-Future-Humanity/dp/031648492X/ref=sr_1_2?crid=2ZWCCI74ZFX55&dchild=1&keywords=precipice+existential+risk+and+

... (truncated, 98 KB total)