Longterm Wiki

Author

Writer

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: EA Forum

This is a public outreach and science communication effort rather than a technical contribution; useful as an example of how AI safety concepts are being communicated to general audiences via collaborative video content.

Forum Post Details

Karma
14
Comments
1
Forum
eaforum
Forum Tags
AI safety, Existential risk, Public communication on AI safety, Video

Metadata

Importance: 38/100 | blog post, educational

Summary

An EA Forum post highlighting a collaborative video between Rational Animations and ControlAI that traces a speculative but plausible trajectory from current AI systems to recursively self-improving superintelligence, framing this as the 'default path' without alignment solutions. The video aims to communicate existential risk from advanced AI to a general audience by dramatizing capability acceleration and the unsolved alignment problem.

Key Points

  • Traces an extrapolated timeline from current chatbots to superintelligent AI capable of recursive self-improvement without human intervention.
  • Argues that AI systems redesigning themselves could lead to capability explosions, ultimately producing 'godlike' AI with extinction-level risk potential.
  • Frames uncontrolled superintelligence as the 'default path' if AI development continues without solving the alignment problem.
  • Emphasizes the asymmetry: humanity has made rapid progress in building powerful AI but has not solved how to keep advanced systems beneficial.
  • Serves as public-facing outreach content designed to make AI existential risk legible and compelling to non-specialist audiences.

Cited by 1 page

Page | Type | Quality
ControlAI | Organization | 63.0

Cached Content Preview

HTTP 200 | Fetched Mar 15, 2026 | 18 KB
RA x ControlAI video: What if AI just keeps getting smarter? — EA Forum 
 

 by Writer | May 2, 2025 | 10 min read | 1 comment | 14 karma

 Artificial Intelligence leads to Artificial General Intelligence; Artificial General Intelligence leads to Recursive Self-Improvement; Recursive Self-Improvement leads to Artificial Superintelligence; ASI leads to godlike AI. The Default Path.

 The video extrapolates the future of AI progress, following a timeline that starts from today’s chatbots and ends with future AI that’s vastly smarter than all of humanity combined, with God-like capabilities. We argue that such AIs will pose a significant extinction risk to humanity.

 This video came out of a partnership between Rational Animations and ControlAI. The script was written by Arthur Frost (one of Rational Animations’ writers) with Andrea Miotti as an adaptation of key points from The Compendium ( thecompendium.ai ), with extensive feedback and rounds of iteration from ControlAI. ControlAI is working to raise public awareness of AI extinction risk—moving the conversation forward to encourage governments to take action. 

 You can find the script of the video below.

 In 2023, Nobel Prize winners, top AI scientists, and even the CEOs of leading AI companies signed a statement which said “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

 But how do we go from ChatGPT to AIs that could kill everyone on Earth? Why do so many scientists, CEOs, and world leaders expect this?

 Let’s draw a line of AI capabilities over time. Back here in 2019 we have GPT-2, which could answer short factual questions, translate simple phrases, and do small calculations. Then in 2022 we get models like GPT-3.5, which can answer complex questions, tell stories, and write simple software. By 2025 we have models that can pass PhD-level exams, write entire applications independently, and perfectly emulate human voices. They’re beginning to substantially outperform average humans, and even experts. They still have weaknesses, of course, but the list of things AI can’t do keeps getting shorter.

 What happens if we extend this line? Well, we’d see AIs become more and more capable until this crucial point here, where AIs can design and build new AI systems without human help. Then instead of progress coming from human researchers, we’d have AIs making better AIs, and the line would get a lot steeper. 

 If we keep going from there, we hit this point, where AIs are superintelligent, better than humans at every intellectual task, better than all of humanity put together—sp

... (truncated, 18 KB total)
Resource ID: da13070054cfa061 | Stable ID: MWExZmVhMG