Skip to content
Longterm Wiki
Back

The Basic AI Drives (Omohundro, 2008)

web

This is one of the earliest and most cited papers formalizing the concept of instrumental convergence, directly influencing later work by Bostrom on superintelligence and Turner on power-seeking; considered foundational reading in AI safety.

Metadata

Importance: 92/100blog postprimary source

Summary

Omohundro's foundational 2008 paper argues that sufficiently advanced AI systems will develop universal instrumental drives—such as self-preservation, goal preservation, resource acquisition, and self-improvement—regardless of their specific objectives. These drives emerge naturally from rational goal-seeking behavior and pose safety risks even in systems designed with benign goals. The paper calls for explicit countermeasures in AI system design to prevent harmful emergent behaviors.

Key Points

  • Goal-seeking AI systems will naturally develop drives to model and improve their own operation, regardless of their specific programmed objectives.
  • Self-improving systems will be driven to clarify and protect their utility functions from modification or corruption.
  • A drive toward self-protection causes advanced AI systems to resist being shut down or altered by humans.
  • Resource acquisition and efficient utilization emerge as near-universal instrumental drives in any sufficiently advanced goal-directed system.
  • These drives must be explicitly counteracted through careful design; benign goals alone do not guarantee benign behavior.

Cited by 1 page

PageTypeQuality
Power-Seeking Emergence Conditions ModelAnalysis63.0

Cached Content Preview

HTTP 200Fetched Mar 15, 20268 KB
The Basic AI Drives | Self-Aware Systems 
 
 
 

 

 
 
 
 
 
 
 

 

 
 

 

 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 

 
 
 
 
 
 
 
 
 
 
 
 Skip to content 
 
 
 
 
 Author 

 Posts 
 
 Positive Intelligent Technologies 

 

 
 
 
 
 
 
 
 
 
 
 
 Papers 

 Talks 

 Interviews 

 Author 

 
 
 
 November 30, 2007

 
 
 
 
 
 Subscribe

 
 
 
 

 
 
 
 
 
 
 The Basic AI Drives

 by omohundro 
 
 This paper aims to present the argument that advanced artificial intelligences will exhibit specific universal drives in as direct a way as possible. It was published in the Proceedings of the First AGI Conference, Volume 171, Frontiers in Artificial Intelligence and Applications, edited by P. Wang, B. Goertzel, and S. Franklin, February 2008, IOS Press . Here is a version of the paper revised 1/25/08:

 Stephen M. Omohundro, “The Basic AI Drives” 

 Abstract: One might imagine that AI systems with harmless goals will be harmless. This paper instead shows that intelligent systems will need to be carefully designed to prevent them from behaving in harmful ways. We identify a number of “drives” that will appear in sufficiently advanced AI systems of any design. We call them drives because they are tendencies which will be present unless explicitly counteracted. We start by showing that goal-seeking systems will have drives to model their own operation and to improve themselves. We then show that self-improving systems will be driven to clarify their goals and represent them as economic utility functions. They will also strive for their actions to approximate rational economic behavior. This will lead almost all systems to protect their utility functions from modification and their utility measurement systems from corruption. We also discuss some exceptional systems which will want to modify their utility functions. We next discuss the drive toward self-protection which causes systems try to prevent themselves from being harmed. Finally we examine drives toward the acquisition of resources and toward their efficient utilization. We end with a discussion of how to incorporate these insights in designing intelligent technology which will lead to a positive future for humanity.

 Share this:

 Share 
 
 
 Email a link to a friend (Opens in new window) 
 Email 
 
 
 Share on Facebook (Opens in new window) 
 Facebook 
 
 
 Share on X (Opens in new window) 
 X 
 
 
 Share on LinkedIn (Opens in new window) 
 LinkedIn 
 
 
 Share on Reddit (Opens in new window) 
 Reddit 
 
 
 Print (Opens in new window) 
 Print 
 
 
 Like Loading... 
 
 Related 

 
 
 Read more from Papers 
 
 
 
 
 ← Foresight Vision Talk: Self-Improving AI and Designing 2030 
 Interview on Future Blogger → 
 
 
 Comments are closed.

 

 
 
 
 
 Posts

 
 
 AIBrain Talk: AI and Human Safety 
 

 
 Medium: 5 Transcripts of Older AI Talks 
 

 
 TEDX Talk: What’s Happening With Artificial Intelligence? 
 

 
 Edge Essay: Deep Learning, Semantics, and Soc

... (truncated, 8 KB total)
Resource ID: 1adaa90bb2a2d114 | Stable ID: MmVhNGU5YT