Longterm Wiki

Credibility Rating

Good (3/5)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: LessWrong

A LessWrong wiki reference entry on recursive self-improvement, useful for understanding a core concept in AI safety discourse around intelligence explosions and loss of human control over advanced AI systems.

Metadata

Importance: 55/100 · wiki page · reference

Summary

A LessWrong wiki article covering the concept of recursive self-improvement, where an AI system iteratively enhances its own capabilities, potentially leading to rapid intelligence explosion. It explores the theoretical underpinnings, risks, and research landscape around AI systems that can modify and improve their own algorithms or architectures.

Key Points

  • Recursive self-improvement refers to AI systems that can enhance their own intelligence or capabilities, feeding improvements back into further self-modification.
  • This process is a key theoretical pathway to an intelligence explosion, where capability gains compound rapidly beyond human-level intelligence.
  • AutoML and neural architecture search represent early, limited forms of automated self-improvement in current AI systems.
  • The concept raises significant AI safety concerns, as a recursively self-improving system may become difficult or impossible to control or align.
  • Understanding the conditions and thresholds for recursive self-improvement is important for forecasting AI timelines and preparing safety measures.

Cited by 1 page

Page: Self-Improvement and Recursive Enhancement · Type: Capability · Quality: 69.0

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 6 KB
Recursive Self-Improvement — LessWrong

Edited by Alex_Altair, joaolkf, Kaj_Sotala, et al. Last updated 20th May 2025.

Recursive Self-Improvement refers to the property of making improvements to one's own ability to make self-improvements. It is an approach to Artificial General Intelligence that allows a system to adjust its own functionality, resulting in improved performance. The system can then feed back on itself, reaching ever higher levels of intelligence with each cycle and resulting in either a hard or soft AI takeoff.

An agent that self-improves gets a linear succession of improvements; if it can also improve its ability to make self-improvements, however, each step will yield exponentially more improvement than the previous one.
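The contrast can be made concrete with a toy model (purely illustrative, not from the article): a "linear" improver adds a fixed capability increment each cycle, while a recursive improver's increment scales with its current capability, so the gains compound.

```python
# Toy model (illustrative only): linear vs. recursive self-improvement.

def linear_improver(cycles, step=1.0):
    """Capability grows by a fixed increment each cycle."""
    capability = 1.0
    for _ in range(cycles):
        capability += step
    return capability

def recursive_improver(cycles, rate=0.5):
    """Each cycle's gain scales with current capability, so gains compound."""
    capability = 1.0
    for _ in range(cycles):
        capability += rate * capability  # the improvement step itself grows
    return capability

for n in (5, 10, 20):
    print(n, linear_improver(n), recursive_improver(n))
```

After 20 cycles the linear improver has 21 units of capability while the compounding improver has over 3,000 — the qualitative point behind the exponential claim above.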

 Recursive self-improvement and AI takeoff 

Recursively self-improving AI is considered to be the driving force behind the intelligence explosion. While any sufficiently intelligent AI will be able to improve itself, Seed AIs are specifically designed to use recursive self-improvement as their primary method of gaining intelligence. Architectures that were not designed with this goal in mind, such as neural networks or large "hand-coded" projects like Cyc, would have a harder time self-improving.

Eliezer Yudkowsky argues that a recursively self-improving AI seems likely to deliver a hard AI takeoff – a fast, abrupt, local increase in capability – since the exponential increase in intelligence would yield exponential returns in benefits and resources that would feed even greater returns in the next step, and so on. In his view a soft takeoff scenario seems unlikely: "it should either flatline or blow up. You would need exactly the right law of diminishing returns to fly through the extremely narrow soft takeoff keyhole."1

Yudkowsky argues that several points seem to support the hard takeoff scenario: one improvement seems to lead the way to another; hardware overhang; and the fact that, when navigating through problem space, one can sometimes find a succession of extremely easy-to-solve problems. These are all reasons to expect sudden and abrupt increases in capability. On the other hand, Robin Hanson argues that there will mostly be a slow and gradual accumulation of improvements, without a sharp change.

 Self-improvement in humans 

The human species has made an enormous amount of progress since evolving around fifty thousand years ago. This is because we can pass on knowledge and infrastructure from previous generations. This is a type of self-improvement, but it is not recursive. If we never learned to modify our own brains, we would eventually reach the point where making new discoveries required more knowledge than could be gained in a human lifetime.

... (truncated, 6 KB total)
Resource ID: 148d0bf3dde0b4a8 | Stable ID: YjgzYzNmNz