Longterm Wiki
Updated 2026-01-28
Summary

Comprehensive survey of AGI timeline predictions ranging from 2025-2027 (ultra-short) to never with current approaches, with median expert estimates around 2032-2037. Key cruxes include whether scaling alone suffices, data/compute limits, and trust in lab leader claims; wide uncertainty reflects deep disagreement about fundamental capabilities questions.

When Will AGI Arrive?


Key Links

Official Website: firstmovers.ai
Wikipedia: en.wikipedia.org
Crux

AGI Timeline Debate

Question: When will we develop artificial general intelligence?
Range: From 2-5 years to never with current approaches
Stakes: Determines the urgency of safety work and policy decisions

Perhaps the most consequential forecasting question in history: When will we develop AI systems that match or exceed human-level intelligence across virtually all domains?

The answer determines how much time we have to solve alignment, whether to prioritize AI safety over other causes, and how urgently we need governance frameworks.

Defining AGI

The central challenge: there is no consensus definition of AGI.

Common criteria:

  • Can perform any intellectual task humans can
  • Can learn new tasks quickly with minimal data
  • Generalizes broadly across domains
  • Autonomous planning and goal-pursuit
  • Economic productivity matching human workers

Proxy metrics:

  • Pass rigorous expert-level tests across domains
  • Outperform median human on most economically valuable tasks
  • Can do the job of an AI researcher (recursive self-improvement)
  • $100B+ annual economic value

Note: the debate often conflates four distinct concepts:

  • Human-level AI (matches median human)
  • Transformative AI (drastically changes world)
  • Artificial General Intelligence (truly general intelligence)
  • Superintelligence (exceeds all humans)

Timeline Camps

Key Forecasts and Positions

AGI Timeline Predictions

When different people and organizations expect AGI

Sam Altman (OpenAI): short timelines

Believes the current paradigm can reach AGI. OpenAI is organized around AGI arriving by the late 2020s.

Evidence: Public statements; OpenAI's preparation for AGI
Prediction: AGI is achievable with current hardware and algorithms
Confidence: medium
Dario Amodei (Anthropic): short timelines

Predicts "powerful AI" by 2026-2027. Anthropic's urgency suggests short timelines.

Evidence: Essay on timelines; scaling plans
Prediction: 2026-2027 for transformative AI if trends continue
Confidence: medium
Demis Hassabis (DeepMind): medium timelines

Predicted AGI "within a decade" in 2023. DeepMind is pursuing AGI on a less aggressive timeline.

Evidence: Public statements
Prediction: AGI within a decade
Confidence: medium
Yann LeCun (Meta): long timelines

Believes current approaches won't reach AGI; new paradigms are needed.

Evidence: Public statements; JEPA proposal
Prediction: Decades away, not years. Need different architectures.
Confidence: high
Gary Marcus: very long or never

Argues current approaches are fundamentally limited. AGI may never come from scaling.

Evidence: Writing; public debates
Prediction: Not close to AGI. Might never get there this way.
Confidence: high
Metaculus (aggregate forecast): medium timelines

Community prediction with a median around 2032-2037, but a very wide distribution.

Evidence: Forecasting platform
Prediction: Median 2035, with the 25th-75th percentiles spanning 2027-2050
Confidence: low
Ajeya Cotra (Coefficient Giving): medium timelines

The bio-anchors framework suggests a median of 2040-2050, though her estimate has shifted shorter recently.

Evidence: Technical report
Prediction: Median 2040 in the 2020 report, likely shorter now
Confidence: low

Key Cruxes

Key Questions

  • Will scaling current approaches reach AGI?

    Yes, scaling is all you need: scaling laws hold, emergent capabilities appear, and there are no fundamental blockers.
    Implication: short timelines (2027-2032). Confidence: medium

    No, new paradigms are needed: the missing key capabilities won't emerge from scaling; architectural innovation is required.
    Implication: long timelines (2040+) or never. Confidence: medium
  • Is the data wall real?

    Yes, limits hit by 2026: internet text is finite, synthetic data degrades, and scaling faces a hard limit.
    Implication: progress slows significantly, longer timelines. Confidence: medium

    No, there are many ways around it: multimodal data, synthetic data, efficiency improvements, curriculum learning.
    Implication: scaling can continue, shorter timelines possible. Confidence: medium
  • How much should we trust lab leaders' timelines?

    Trust them, they are the most informed: they have non-public information about capabilities and scaling plans.
    Implication: short timelines (2027-2030). Confidence: medium

    Discount the hype, incentives matter: labs benefit from hype, there is a history of overpromising, and insiders have blind spots.
    Implication: longer timelines than labs claim. Confidence: medium
  • Will progress continue exponentially?

    Yes, no limits in sight: compute is growing, investment is massive, and there are no fundamental obstacles.
    Implication: rapid progress, short timelines. Confidence: low

    No, diminishing returns are setting in: the low-hanging fruit is picked, costs are rising, and limits are approaching.
    Implication: slower progress, longer timelines. Confidence: medium

What Would Update Timelines?

Evidence for shorter timelines:

  • GPT-5/6 showing qualitative leap in reasoning and planning
  • Successful scaling past data limits
  • AI substantially accelerating AI research
  • Solving ARC benchmark or similar generalization tests
  • Continued exponential capability gains

Evidence for longer timelines:

  • Scaling 100x with only incremental improvements
  • Hitting hard data or compute walls
  • Persistent failures on key capabilities despite scale
  • Need for architectural breakthroughs that don't arrive
  • Progress slowing on key benchmarks

Historical Track Record

Past AGI predictions:

  • 1958: "Within ten years a digital computer will be the world's chess champion" - Herbert Simon and Allen Newell
  • 1965: "Machines will be capable, within twenty years, of doing any work that a man can do" - Herbert Simon
  • 1970: "In from three to eight years we will have a machine with the general intelligence of an average human being" - Marvin Minsky
  • 1980s: Expert systems will lead to AGI by 2000
  • 2000s: AGI by 2020

Pattern: AGI has consistently been predicted as 20-30 years away. Should we believe this time is different?

Arguments it's different now:

  • Have empirical scaling laws, not just speculation
  • Concrete progress on benchmarks and capabilities
  • Massive investment and resources
  • Clear path forward (scaling) vs unknown unknowns

Arguments it's the same:

  • Still don't understand intelligence
  • Benchmarks may not capture true intelligence
  • Economic and technical obstacles remain
  • Same overconfidence as past predictions

The Distribution Shape

Most forecasters hold heavy-tailed probability distributions over AGI arrival dates:

Short tail (optimistic):

  • 5-10% chance: AGI by 2027
  • 20-25% chance: AGI by 2030
  • Driven by: Scaling working, rapid progress, no blockers

Central mass:

  • 50% chance: AGI by 2035-2040
  • Most likely scenario: Continued progress with some obstacles

Long tail (pessimistic):

  • 20-30% chance: AGI after 2050
  • 5-10% chance: Never with current paradigms
  • Driven by: Fundamental limits, need for new paradigms
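The percentile anchors above can be stitched into one rough cumulative distribution. A minimal sketch using piecewise-linear interpolation; the anchor points are illustrative midpoints of the ranges quoted above, not a fitted forecast:

```python
from bisect import bisect_right

# (year, cumulative probability) anchors: midpoints of the ranges above.
# 5-10% by 2027 -> 0.075; 20-25% by 2030 -> 0.225; 50% by ~2038;
# 20-30% after 2050 means roughly 75% by 2050. Illustrative only.
ANCHORS = [(2027, 0.075), (2030, 0.225), (2038, 0.50), (2050, 0.75)]

def p_agi_by(year):
    """Piecewise-linear interpolation of P(AGI arrives by `year`)."""
    years = [y for y, _ in ANCHORS]
    probs = [p for _, p in ANCHORS]
    if year <= years[0]:
        return probs[0]
    if year >= years[-1]:
        return probs[-1]
    i = bisect_right(years, year)
    y0, y1 = years[i - 1], years[i]
    p0, p1 = probs[i - 1], probs[i]
    return p0 + (p1 - p0) * (year - y0) / (y1 - y0)

print(p_agi_by(2030))  # 0.225
print(p_agi_by(2040))  # just above 0.5
```

The heavy tail shows up in the slope: probability mass accumulates quickly through the 2030s and then much more slowly after 2040.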

Wide uncertainty is rational given:

  • Deep uncertainty about scaling limits
  • Unknown unknowns
  • Dependence on definition
  • Historical poor track record

Implications for Different Timelines

If AGI by 2027-2030:

  • Extremely urgent to solve alignment NOW
  • Current safety research may be too slow
  • Need immediate governance action
  • Race dynamics critical concern
  • May not get warning signs

If AGI by 2030-2040:

  • Time to iterate on safety
  • Can learn from weaker systems
  • Governance frameworks can develop
  • Safety research can mature
  • More coordination opportunities

If AGI after 2050:

  • Safety research can be thorough
  • Governance can be careful and democratic
  • Current hype may be overblown
  • Other causes may be higher priority
  • Different paradigms may emerge

Economic vs Philosophical AGI

Important distinction often blurred:

Economically transformative AI:

  • Automates most jobs
  • Generates trillions in value
  • Fundamentally changes society
  • Might come soon (2027-2035)
  • Doesn't require "general" intelligence

Philosophically general intelligence:

  • True understanding across all domains
  • Quick learning like humans
  • Causal reasoning and abstraction
  • Might require paradigm shifts
  • Could be much further (2040+)

Why it matters:

  • Economic transformation could happen without "AGI"
  • Most impacts come from economic transformation
  • But existential risk might require true AGI
  • Definitions determine timeline estimates

The Compute Bottleneck

Different views on compute as limiting factor:

Optimistic: Compute is abundant

  • Moore's law continues
  • Efficiency improvements ongoing
  • Cloud compute scales easily
  • No physical limits near

Pessimistic: Compute limits soon

  • Training costs becoming prohibitive ($1B+)
  • Energy and chip constraints
  • Economic feasibility limits
  • Can't scale 1000x more
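The pessimistic arithmetic can be made concrete. A minimal sketch, taking the ~$1B run and 1000x factor from the text but assuming a hypothetical 3x annual cost growth and a hypothetical ~$1T economic ceiling (neither figure is from the source):

```python
import math

current_cost = 1e9       # ~$1B frontier training run (from the text)
growth_per_year = 3.0    # assumed cost multiplier per year (hypothetical)
budget_cap = 1e12        # assumed economic ceiling, ~$1T (hypothetical)

# Years until a 1000x larger run hits the cap: solve 1e9 * g^t = 1e12
years_to_cap = math.log(budget_cap / current_cost, growth_per_year)
print(round(years_to_cap, 1))  # 6.3
```

Under these assumptions the 1000x headroom is exhausted in well under a decade, which is why the economics of training costs bears so directly on timelines.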

Resolution matters:

  • If compute limits: Longer timelines, regulated by economics
  • If compute abundant: Timelines depend on algorithmic progress

The China Factor

How does China affect timelines?

Arguments China accelerates:

  • Competition drives urgency
  • Massive investment
  • Less safety caution
  • Different approaches might work

Arguments China doesn't change much:

  • US still ahead on capabilities
  • Chinese models lag 1-2 years
  • Limited to similar approaches
  • Compute restrictions bite

Strategic implications:

  • If China racing: Pressure for short timelines
  • If US leads comfortably: Can afford to be cautious
  • Matters for regulation and safety investment

Recursive Self-Improvement

Wild card: AI accelerating its own development

If happens soon:

  • Could dramatically shorten timelines
  • "Singularity" scenario
  • Hard to predict outcomes
  • Very fast takeoff possible

If doesn't happen:

  • Progress continues at current pace
  • More time to prepare
  • Gradual development allows adjustment

Current status:

  • AI assists with coding and research
  • But not yet transformative acceleration
  • Unclear if/when recursive improvement kicks in

Base Rates and Reference Classes

What should we compare to?

Reference class: Major technologies

  • Electricity: 50 years from invention to transformation
  • Computers: 40 years from invention to ubiquity
  • Internet: 20 years from invention to transformation
  • Suggests: Long timelines (decades)

Reference class: Exponential technologies

  • Semiconductors: Exponential for 50+ years
  • Genomics: Exponential progress continues
  • Suggests: Continued rapid progress possible

Reference class: Breakthroughs

  • Manhattan Project: 3 years when focused
  • Apollo Program: 8 years with resources
  • Suggests: Massive resources can compress timelines

Problem: AGI is unique, unclear which reference class applies

Related Pages

Top Related Pages

Safety Research

Scalable Oversight

Key Debates

Open vs Closed Source AI
Government Regulation vs Industry Self-Governance
Is Interpretability Sufficient for Safety?
Should We Pause AI Development?
Is AI Existential Risk Real?

Labs

Google DeepMind
OpenAI

Organizations

Alignment Research Center

Concepts

Superintelligence
Long-Timelines Technical Worldview