AGI Timeline

A synthesis of AGI timeline forecasts showing dramatic acceleration: the expert median dropped from 2061 (2018) to 2047 (2023), and the Metaculus mean estimate has fallen from 50 years out to 5 years since 2020, with current median predictions (50% probability) clustering between 2027 and 2045. Aggregates 9,300+ predictions across expert surveys, prediction markets, and lab leader statements, and documents key uncertainties around scaling limits, definitions, and technical bottlenecks.


Quick Assessment

| Dimension | Assessment | Evidence |
|---|---|---|
| Median Expert Forecast (2026) | 2040-2047 (50% HLMI) | AI Impacts 2023 survey found a 50% probability of HLMI by 2047, down 13 years from 2022 |
| Prediction Markets | 2027-2031 median | Metaculus forecasters predict a median of November 2027 (1,700+ forecasters) |
| Lab Leader Estimates | 2026-2029 | Sam Altman, Dario Amodei, and Demis Hassabis converge on the late 2020s |
| Timeline Trend | Rapidly shortening | Expert median dropped from 2061 (2018) → 2059 (2022) → 2047 (2023); Metaculus dropped from 50 years to 5 years since 2020 |
| Uncertainty Range | Very high (±15-20 years) | 80% confidence intervals span 2026-2045+ across forecasts |
| Definition Sensitivity | High | Different AGI definitions shift predictions by 10-20 years |
| Confidence Level | Low-Medium | Expert surveys show framing effects of 15+ years; historical predictions consistently too pessimistic |

| Source | Link |
|---|---|
| Related Resource | timelines.issarice.com |
| Wikipedia | en.wikipedia.org |

Overview

AGI timeline predictions attempt to forecast when artificial intelligence will match or exceed human cognitive abilities across all domains. Recent expert surveys put a 50% probability on AGI arriving between roughly 2040 and 2050, though estimates vary widely with the AGI definition and measurement criteria used.

Recent surveys show timelines accelerating sharply compared to historical predictions. The 2023 AI Impacts survey found a median expert prediction of 2047 for "High-Level Machine Intelligence," while Metaculus community forecasts are far shorter, with medians ranging from the late 2020s to the mid-2030s depending on the question. Significant uncertainty remains around capability thresholds, measurement methodologies, and potential discontinuous progress.

AGI Timeline Factors

```mermaid
flowchart TD
  subgraph DRIVERS["Accelerating Factors"]
      COMPUTE[Compute Scaling<br/>10x per 18 months]
      ALGO[Algorithmic Improvements<br/>2x efficiency per year]
      INVEST[Investment Surge<br/>$100B+ training clusters by 2027]
      DATA[Data Availability<br/>Synthetic and multimodal]
  end

  subgraph BRAKES["Decelerating Factors"]
      SCALING[Scaling Limits<br/>Diminishing returns]
      ENERGY[Energy Constraints<br/>Data center capacity]
      REGULATION[Regulatory Barriers<br/>AI governance]
      SAFETY[Safety Research<br/>Alignment requirements]
  end

  subgraph UNCERTAINTY["Key Uncertainties"]
      PARADIGM[Paradigm Shifts<br/>New architectures needed?]
      DEFINITION[Definition Disputes<br/>What counts as AGI?]
      MEASURE[Measurement<br/>How to evaluate AGI?]
  end

  DRIVERS --> TIMELINE[AGI Timeline<br/>Forecast]
  BRAKES --> TIMELINE
  UNCERTAINTY --> TIMELINE

  TIMELINE --> SHORT[Optimistic: 2026-2029]
  TIMELINE --> MED[Median: 2035-2045]
  TIMELINE --> LONG[Conservative: 2050+]

  style SHORT fill:#ffcccc
  style MED fill:#ffffcc
  style LONG fill:#ccffcc
  style TIMELINE fill:#e6f3ff
```

AGI Timeline Risk Assessment

| Factor | Assessment | Timeline Impact | Source |
|---|---|---|---|
| Expert Survey Median | 2040-2050 | Baseline estimate | AI Impacts 2023 |
| Prediction Market Aggregate | 2040-2045 | Market consensus | Metaculus |
| Lab Leader Statements | 2025-2035 | Optimistic bound | OpenAI, DeepMind |
| Scaling Limitations | 2050+ | Conservative bound | Epoch AI |

Expert Survey Results

Recent Survey Data (2023-2026)

| Survey | Year | Sample Size | Median AGI Timeline | Key Finding | Source |
|---|---|---|---|---|---|
| AI Impacts ESPAI | 2023 | 2,778 experts | 2047 (HLMI) | 13-year drop from 2060 in 2022 | AI Impacts |
| Digital Minds Survey | 2025 | 67 experts | 2050 (50% probability) | 20% by 2030, 40% by 2040 | Digital Minds Report |
| AI Multiple Meta-Analysis | 2026 | 9,300 predictions | 2040 (aggregated) | Synthesized all public forecasts | AI Multiple |
| Metaculus Community | 2026 | 1,700+ forecasters | Nov 2027 median | 80% CI: July 2026 - Feb 2031 | Metaculus |
| Samotsvety Superforecasters | 2023 | 15 forecasters | 28% by 2030 | Professional forecasters more conservative | 80,000 Hours |

Expert timelines have consistently shortened over the past decade, with dramatic acceleration since 2022:

| Year | Expert Median (HLMI) | Metaculus Median | Change from Previous |
|---|---|---|---|
| 2018 | 2061 | 2070+ | Baseline |
| 2022 | 2059-2060 | 2055 | -2 years |
| 2023 | 2045-2047 | 2040 | -13 to -15 years |
| 2024 | ≈2040 | 2035 | -5 years |
| 2025 | ≈2035 | 2030 | -5 years |
| 2026 | Varied | Nov 2027 | -3 years |

The 80,000 Hours analysis notes that "in four years, the mean estimate on Metaculus for when AGI will be developed has plummeted from 50 years to five years." Historical expert predictions have consistently been too pessimistic—in 2022, researchers thought AI wouldn't write simple Python code until ~2027, but AI met that threshold by 2023-2024.
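The shrinking medians in the trend table can be summarized with a naive least-squares fit. This is purely illustrative: range entries are collapsed to single values, and extrapolating forecast revisions says nothing about when AGI actually arrives.

```python
# Naive least-squares fit to the expert-median column of the trend table.
# Range entries are collapsed to midpoints (e.g. 2045-2047 -> 2046).
survey_year = [2018, 2022, 2023, 2024, 2025]
median_agi = [2061, 2059.5, 2046, 2040, 2035]

n = len(survey_year)
mx = sum(survey_year) / n
my = sum(median_agi) / n
slope = sum((x - mx) * (y - my) for x, y in zip(survey_year, median_agi)) \
    / sum((x - mx) ** 2 for x in survey_year)
print(f"expert median shortens ~{-slope:.1f} calendar years per survey year")
```

A slope steeper than -1 means each passing year removes more than one year from the forecast, which is why naive extrapolation of the trend itself quickly becomes meaningless.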

Leading AI researchers increasingly cite rapid scaling of language models and emergent capabilities as evidence for shorter timelines.

Prediction Market Analysis

Metaculus Aggregates (January 2026)

| Question | Current Prediction | Confidence Interval | Forecasters | Source |
|---|---|---|---|---|
| First General AI Announced | Nov 30, 2027 median | July 2026 - Feb 2031 (80%) | 1,700+ | Metaculus |
| Weakly General AI | Nov 2033 | Dec 2028 - Sep 2045 | 1,800+ | Metaculus |
| Transformative AI | 2031 median | 2027-2045 (80%) | 1,000+ | AGI Dashboard |
| AGI by 2030 | ≈40% probability | 25-55% range | Aggregated | Market consensus |
| AGI by 2040 | ≈75% probability | 60-85% range | Aggregated | Market consensus |
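The two aggregate probabilities (≈40% by 2030, ≈75% by 2040) imply a rough distribution over arrival year. A minimal sketch, assuming (unrealistically) a normal distribution over calendar year, recovers the implied median with only the standard library:

```python
from statistics import NormalDist

# Back out an implied distribution from two aggregate quantiles:
# P(AGI by 2030) ~ 0.40 and P(AGI by 2040) ~ 0.75 (assumptions from the
# table). A normal over calendar year is a crude choice; real community
# aggregates are right-skewed with heavy tails.
z30 = NormalDist().inv_cdf(0.40)
z40 = NormalDist().inv_cdf(0.75)
sigma = (2040 - 2030) / (z40 - z30)
mu = 2030 - sigma * z30          # mu is also the implied median year
print(f"implied median ~ {mu:.0f}, sigma ~ {sigma:.1f} years")
```

The implied median lands in the early 2030s, broadly consistent with the "Transformative AI" row above; the mismatch with the Nov 2027 row reflects the different resolution criteria of the questions.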

Platform Comparison

| Platform | AGI Median | 50% Probability Year | Key Difference |
|---|---|---|---|
| Metaculus | Mid-2030s | 2030-2031 | Stricter definition requiring robotics |
| Manifold | 2028 | ≈50% before 2028 | More aggressive, market-based |
| Polymarket | 2029-2030 | ≈45% by 2029 | Real-money incentives |
| Expert Surveys | 2040-2047 | 2040-2045 | Academic conservatism |
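When several platforms run the same question, forecasts are often pooled with the geometric mean of odds, an aggregation that tends to outperform the simple average in forecasting studies. A sketch with placeholder probabilities (not figures quoted from the platforms above):

```python
import math

# Pool per-platform probabilities for one question via the geometric mean
# of odds: average in log-odds space, then map back to a probability.
def pool_geo_odds(probs):
    log_odds = [math.log(p / (1 - p)) for p in probs]
    mean_lo = sum(log_odds) / len(log_odds)
    return 1 / (1 + math.exp(-mean_lo))

# Illustrative placeholder values for P(AGI by 2030), not platform quotes.
p_by_2030 = {"Metaculus": 0.40, "Manifold": 0.55, "Polymarket": 0.45}
print(f"pooled P(AGI by 2030) ~ {pool_geo_odds(p_by_2030.values()):.2f}")
```

Unlike a simple mean of probabilities, this pooling is extremized toward confident forecasters and is invariant to whether you pool the event or its complement.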

Market Dynamics

Prediction markets show several notable patterns:

  • Dramatic shortening: Metaculus dropped from 50 years to 5 years median since 2020
  • Volatility spikes following major capability announcements (GPT-4, Claude 3, o1, o3)
  • Shorter timelines in technical communities vs. academic surveys (10-15 year gap)
  • Definition sensitivity with different AGI operationalizations varying by 10-20 years

Lab Leader Statements

Industry Timeline Claims (Updated January 2026)

| Organization | Leader | Claimed Timeline | Key Statement | Source |
|---|---|---|---|---|
| OpenAI | Sam Altman | 2025-2028 | "We are now confident we know how to build AGI"; 2026 models will "amaze us" | Sam Altman Blog |
| Anthropic | Dario Amodei | 2026-2027 | "AI may surpass humans in most tasks by 2027"; "rapidly running out of convincing blockers" | Lex Fridman Interview |
| DeepMind | Demis Hassabis | "Within this decade" (by 2030) | "I'd bet on achieving what you might call AGI within the next few years" | Nature interview 2024 |
| DeepMind | Shane Legg | 50% by 2028 | "Minimal AGI" prediction (January 2026) | DeepMind cofounder |
| Meta | Yann LeCun | "Many decades away" | Skeptical of current paradigm reaching AGI | Public statements 2024 |
| xAI | Elon Musk | 2026 | AI "smarter than any single human" | Public statements |

Implied Timelines from Investment Plans

Several labs' public roadmaps suggest aggressive acceleration:

| Metric | 2024 | 2025 | 2026 | 2027 | Source |
|---|---|---|---|---|---|
| Training Run Cost | ≈$100M | ≈$1B | $10B+ | $100B clusters | Dario Amodei |
| Compute per Training Run | Baseline | 3-10x | 30-100x | 300-1000x | Scaling projections |
| Data Center Power | 100-500 MW | 500 MW-1 GW | 1-5 GW | 5-10 GW | Industry reports |
| Researcher FTEs | 5,000+ | 10,000+ | 20,000+ | 50,000+ | Lab hiring plans |
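The cost row implies roughly 10x growth per year. Under that single assumption the projection is one line of arithmetic (real spending depends on hardware prices, efficiency gains, and lab decisions):

```python
# Project frontier training-run cost assuming a constant 10x per year,
# the growth rate implied by the cost row above. Illustrative only.
base_year, base_cost = 2024, 1e8   # ~$100M frontier training run in 2024
for year in range(2024, 2028):
    cost = base_cost * 10 ** (year - base_year)
    print(f"{year}: ~${cost:,.0f}")
```

Three doublings-of-magnitude in three years is the entire content of the "$100B clusters by 2027" claim; any slowdown in the annual multiplier pushes that milestone out correspondingly.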

Key Uncertainty Factors

Definition Problems

| AGI Definition | Timeline Range | Key Challenge |
|---|---|---|
| Human-level performance | 2030-2040 | Benchmark gaming |
| Economic substitution | 2040-2060 | Deployment lags |
| Scientific breakthrough | 2035-2050 | Discovery vs. automation |
| Consciousness/sentience | 2050+ | Hard problem of consciousness |

Technical Bottlenecks

Current limitations that may extend timelines:

  • Reasoning capabilities: Current models struggle with complex multi-step reasoning
  • Long-horizon planning: Limited ability for extended autonomous operation
  • Robustness: Brittleness to distribution shifts and adversarial examples
  • Sample efficiency: Still require massive training data compared to humans

Scaling Constraints

| Constraint Type | Impact on Timeline | Mitigation Strategies |
|---|---|---|
| Compute hardware | +5-10 years if limits are hit | Advanced chip architectures |
| Data availability | +3-5 years | Synthetic data generation |
| Energy requirements | +2-5 years | Efficiency improvements |
| Regulatory barriers | +5-15 years | International coordination |

Current Capability Trajectory

2024 State Assessment

Recent capabilities suggest accelerating progress toward AGI:

  • Multi-modal integration: Vision, text, and code in single models
  • Tool use: Effective API calls and workflow automation
  • Emergent reasoning: Chain-of-thought and constitutional approaches
  • Scientific research: Automated hypothesis generation and testing

Projection Methods

| Approach | 2030 Prediction | Methodology | Limitations |
|---|---|---|---|
| Scaling laws | 85% human performance | Extrapolate compute trends | May hit diminishing returns |
| Expert elicitation | 60% probability | Survey aggregation | Bias and overconfidence |
| Benchmark tracking | 90% on specific tasks | Performance trajectory | Narrow evaluation |
| Economic modeling | 40% job automation | Labor substitution | Deployment friction |
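The "scaling laws" row extrapolates performance from compute trends. A minimal sketch of the idea: fit a power law loss = a * C^(-b) to two (compute, loss) points and evaluate it at a larger budget. All constants here are made up for illustration; real fits (Chinchilla-style) use many training runs and careful methodology.

```python
import math

# Hypothetical (training FLOP, loss) pairs -- illustrative numbers only.
c1, l1 = 1e21, 2.8
c2, l2 = 1e23, 2.2

# Solve loss = a * C**(-b) exactly through the two points.
b = math.log(l1 / l2) / math.log(c2 / c1)
a = l1 * c1 ** b

# Extrapolate to an assumed ~1e27 FLOP budget around 2030.
loss_2030 = a * (1e27) ** (-b)
print(f"b ~ {b:.3f}, extrapolated loss at 1e27 FLOP ~ {loss_2030:.2f}")
```

The fragility the table's "Limitations" column points at is visible here: the extrapolation spans four orders of magnitude beyond the fitted range, so even small curvature away from a pure power law changes the answer substantially.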

Disagreement and Cruxes

Major Points of Contention

Timeline Pessimists (2050+) argue:

  • Current paradigms (transformers, scaling) will hit fundamental limits
  • Alignment difficulty will require extensive safety research before deployment
  • Economic and regulatory barriers will slow deployment
  • Key cognitive capabilities (long-horizon planning, true reasoning) may require architectural breakthroughs

Timeline Optimists (2025-2035) contend:

  • Scaling laws will continue with current paradigms through 2030+
  • Emergent capabilities from larger models will bridge remaining capability gaps
  • Competitive pressure and $100B+ investments will accelerate development
  • Recent progress (o1, o3 reasoning, agents) shows faster-than-expected capability gains

Key Cruxes

| Question | Impact on Timeline | Current Evidence | Optimist View | Pessimist View |
|---|---|---|---|---|
| Will scaling laws continue? | ±10 years | Mixed signals since GPT-4 | Compute scaling to $100B clusters will unlock new capabilities | Diminishing returns visible; new paradigms needed |
| Can transformers achieve AGI? | ±15-20 years | Chain-of-thought, o1/o3 reasoning | Architecture is sufficient with scale | Fundamental limits on reasoning and planning |
| How hard is alignment? | ±10-15 years | Constitutional AI, RLHF improvements | Tractable with current approaches | Requires solving deep open problems |
| Will regulation slow progress? | ±5-15 years | EU AI Act, compute governance | Light touch will prevail | Precautionary regulation inevitable |
| Is AGI a single threshold? | ±10 years | Definitional debates | Continuous capability improvement | Discrete capability jumps required |

Timeline Implications

Strategic Considerations

Different timelines imply varying urgency for:

  • Safety research: Shorter timelines require immediate focus on alignment solutions
  • Governance frameworks: International coordination becomes critical
  • Economic preparation: Labor market disruption planning
  • Coordination mechanisms: Preventing dangerous racing dynamics

Policy Relevance

Timeline uncertainty affects regulation approaches:

  • Precautionary principle: Plan for shortest reasonable timelines
  • Adaptive governance: Build flexible frameworks for multiple scenarios
  • Research prioritization: Balance capability and safety advancement

Sources & Resources

Primary Research

| Category | Source | Key Contribution |
|---|---|---|
| Expert Surveys | AI Impacts 2023 Survey | Largest expert survey (2,778 respondents) |
| Prediction Markets | Metaculus AGI Questions | Continuous probability tracking (1,700+ forecasters) |
| Technical Analysis | Epoch AI Scaling Reports | Compute and training cost projections |
| Industry Perspectives | OpenAI Planning Documents | Lab development roadmaps |
| Meta-Analysis | 80,000 Hours Timeline Review | Synthesis of forecaster disagreements |

2025-2026 Key Sources

| Source | Date | Key Finding | URL |
|---|---|---|---|
| Sam Altman "Gentle Singularity" | Jan 2025 | "We know how to build AGI"; 2026 will see "systems that figure out novel insights" | Blog |
| Dario Amodei Lex Fridman Interview | Nov 2024 | "Rapidly running out of convincing blockers"; 2026-2027 possible | Transcript |
| AI Multiple Meta-Analysis | Jan 2026 | 9,300 predictions analyzed; aggregated median ≈2040 | Analysis |
| Digital Minds Forecasting | 2025 | 67 experts: 20% by 2030, 50% by 2050 | Report |
| AGI Timelines Dashboard | Jan 2026 | Combined forecasts: 2031 median (80% CI: 2027-2045) | Dashboard |

Forecasting Organizations

| Organization | Focus Area | Key Resources |
|---|---|---|
| AI Impacts | Expert surveys and trend analysis | Annual ESPAI survey reports |
| Metaculus | Prediction markets | AGI timeline questions, AGI Horizons tournament |
| Epoch AI | Compute trends and scaling laws | Technical reports, training cost projections |
| Future of Humanity Institute | Long-term forecasting | Academic papers (now closed) |
| Samotsvety Forecasting | Superforecaster aggregation | AGI probability estimates |

  • Scaling debates: See scaling law discussion
  • Capability analysis: Review core capabilities development
  • Timeline uncertainty: Explore forecasting methodology
  • Risk implications: Consider takeoff dynamics scenarios

References

1. OpenAI

OpenAI is a leading AI research and deployment company focused on building advanced AI systems, including GPT and o-series models, with a stated mission of ensuring artificial general intelligence (AGI) benefits all of humanity. The homepage serves as a gateway to their research, products, and policy work spanning capabilities and safety.

★★★★☆

2. Google DeepMind

Google DeepMind is a leading AI research laboratory combining the former DeepMind and Google Brain teams, focused on developing advanced AI systems and conducting research across capabilities, safety, and applications. The organization is one of the most influential labs in AI development, working on frontier models including Gemini and publishing widely-cited safety and capabilities research.

★★★★☆

3. Epoch AI

Epoch AI is a research organization focused on investigating and forecasting trends in artificial intelligence, particularly around compute, training data, and algorithmic progress. They produce empirical analyses and datasets to inform understanding of AI development trajectories and support better decision-making in AI governance and safety.

★★★★☆
4. Future of Humanity Institute

The official website of the Future of Humanity Institute (FHI), an Oxford University research center that was foundational in establishing the fields of existential risk research and AI safety. FHI closed on 16 April 2024 after approximately two decades of influential work. The site now serves as an archived record of the institution's history, research agenda, and legacy.

★★★★☆

5. Anthropic Model Specification

This URL points to Anthropic's published model specification (often called 'Claude's Constitution'), which outlines the values, principles, and behavioral guidelines that govern Claude's responses. However, the page appears to be returning a 404 error, suggesting the content has moved or is unavailable at this URL.

★★★★☆

6. Meta AI

Meta's official AI homepage showcases their broad research and product portfolio including Llama 4 (large language models), Segment Anything Model 3 (computer vision), V-JEPA 2 (world models), and AI glasses hardware. The company organizes its AI work around four research pillars: Communication & Language, Embodiment & Actions, Alignment, and Core Learning & Reasoning. Meta emphasizes open-source development and practical deployment at scale.

★★★★☆
7. AI Impacts

AI Impacts is a research organization that investigates empirical questions relevant to AI forecasting and safety, including AI timelines, discontinuous progress risks, and existential risk arguments. It maintains a wiki and blog featuring expert surveys, historical analyses, and structured arguments about transformative AI development. Notable outputs include periodic expert surveys on AI progress timelines.

★★★☆☆
8. Nature interview 2024 (Nature)

This is the homepage of Nature, a leading multidisciplinary scientific journal, displaying current news and research articles. The visible content includes stories on AI's influence on human expression, China's AI ambitions, and AI-driven memory shortages in labs, alongside biology and neuroscience research. No specific AI safety paper or interview is identifiable from the content provided.

★★★★★

9. Metaculus AGI Question

A Metaculus forecasting question tracking community predictions on when artificial general intelligence will be achieved. Aggregates probabilistic estimates from forecasters worldwide, providing a crowd-sourced timeline estimate for AGI development.

★★★☆☆

10. Anthropic

Anthropic is an AI safety company focused on building reliable, interpretable, and steerable AI systems. The company conducts frontier AI research and develops Claude, its family of AI assistants, with a stated mission of responsible development and maintenance of advanced AI for long-term human benefit.

★★★★☆

11. Metaculus

Metaculus is a collaborative online forecasting platform where users make probabilistic predictions on future events across domains including AI development, biosecurity, and global catastrophic risks. It aggregates crowd wisdom and expert forecasts to produce calibrated probability estimates on complex questions relevant to long-term planning and existential risk assessment.

★★★☆☆

12. AI Impacts 2023 Survey (ESPAI)

AI Impacts surveyed 2,778 AI researchers in 2023 about their expectations for AI progress, safety risks, and transformative milestones. The survey captures expert probability estimates for high-level machine intelligence, catastrophic risk, and the importance of AI safety research. It is one of the largest and most comprehensive surveys of AI researcher opinion on existential and transformative AI risk.

★★★☆☆

13. Metaculus: First General AI System

A Metaculus forecasting question asking community members to predict when the first generally capable AI system will be created and publicly announced. The question aggregates probabilistic forecasts from many forecasters to produce a community estimate on AGI arrival timing.

★★★☆☆

14. AI Multiple Meta-Analysis

A meta-analysis of approximately 9,800 predictions from AI researchers, entrepreneurs, and prediction markets regarding when AGI will arrive. The analysis synthesizes survey data, individual expert forecasts, and community prediction markets to show a trend toward earlier AGI expectations, with scientists predicting the 2040s and community forecasters predicting the 2030s.

15. 80,000 Hours AGI Timelines Review (Benjamin Todd, 2025)

A comprehensive synthesis by 80,000 Hours reviewing expert predictions on AGI timelines from multiple groups including AI lab leaders, researchers, and forecasters. The review finds a notable convergence toward shorter timelines, with many estimates suggesting AGI could arrive before 2030. Different expert communities that previously disagreed are now showing increasingly similar estimates.

★★★☆☆
16. AGI Timelines Dashboard (agi.goodheartlabs.com)

An interactive dashboard aggregating and visualizing AGI timeline forecasts from major prediction markets and forecasting platforms including Metaculus, Manifold Markets, and Kalshi. It displays median year predictions and probability distributions for milestones such as 'weakly general AI,' 'general AI,' and passing the Turing Test, allowing users to download underlying data.

17. Sam Altman, "The Gentle Singularity"

Sam Altman argues that humanity has crossed an inflection point in AI development, with superintelligence within reach and the hardest scientific obstacles already overcome. He forecasts rapid capability gains through the late 2020s—agents, novel-insight systems, and physical robots—and suggests that abundant intelligence and energy will fundamentally transform human progress, while daily life may feel more continuous than discontinuous.

18. Dario Amodei Lex Fridman Interview

A long-form podcast interview with Anthropic CEO Dario Amodei covering AI safety philosophy, Anthropic's approach to alignment and Constitutional AI, risks from advanced AI systems, and the competitive dynamics of the AI industry. Amodei discusses his views on existential risk, the importance of interpretability research, and why he believes safety-focused labs should be at the frontier.

19. Samotsvety Forecasting (samotsvety.org)

Samotsvety is a team of elite superforecasters known for winning major forecasting competitions by significant margins. They provide forecasting consulting and publish predictions on high-stakes questions, including topics relevant to AI risk and global catastrophic risks. Their work represents a rigorous probabilistic approach to assessing uncertain futures.

Related Wiki Pages

Top Related Pages

  • Approaches: AI-Augmented Forecasting
  • Risks: Emergent Capabilities
  • Analysis: Capabilities-to-Safety Pipeline Model, Deceptive Alignment Decomposition Model, Timelines Wiki
  • People: Sam Altman, Demis Hassabis, Elon Musk
  • Organizations: Google DeepMind, OpenAI, Anthropic
  • Key Debates: When Will AGI Arrive?, The Case for AI Existential Risk
  • Concepts: Large Language Models, Epistemic Orgs Overview, AGI Development, Tool Use and Computer Use, Scientific Research Capabilities
  • Historical: Deep Learning Revolution Era