AGI Timeline
Comprehensive synthesis of AGI timeline forecasts showing dramatic acceleration: the expert median dropped from 2061 (2018) to 2047 (2023), and the Metaculus estimate fell from 50 years out to roughly five years since 2020, with current median predictions clustering in the 2027-2045 range (50% probability). Aggregates 9,300+ predictions across expert surveys, prediction markets, and lab leader statements, and documents key uncertainties around scaling limits, definitions, and technical bottlenecks.
Quick Assessment
| Dimension | Assessment | Evidence |
|---|---|---|
| Median Expert Forecast (2026) | 2040-2047 (50% HLMI) | AI Impacts 2023 Survey found 50% probability of HLMI by 2047, down 13 years from 2022 |
| Prediction Markets | 2027-2031 median | Metaculus forecasters predict median of November 2027 (1,700+ forecasters) |
| Lab Leader Estimates | 2026-2029 | Sam Altman, Dario Amodei, and Demis Hassabis converge on late 2020s |
| Timeline Trend | Rapidly shortening | Expert median dropped from 2061 (2018) → 2059 (2022) → 2047 (2023); Metaculus dropped from 50 years to 5 years since 2020 |
| Uncertainty Range | Very high (±15-20 years) | 80% confidence intervals span 2026-2045+ across forecasts |
| Definition Sensitivity | High | Different AGI definitions shift predictions by 10-20 years |
| Confidence Level | Low-Medium | Expert surveys show framing effects of 15+ years; historical predictions consistently too pessimistic |
Key Links
| Source | Link |
|---|---|
| Related Resource | timelines.issarice.com |
| Wikipedia | en.wikipedia.org |
Overview
AGI timeline predictions represent attempts to forecast when artificial intelligence will match or exceed human cognitive abilities across all domains. Recent expert surveys suggest a 50% probability of AGI development between roughly 2040 and 2050, though estimates vary widely depending on the AGI definition and measurement criteria used.
Recent surveys show accelerating timelines compared to historical predictions. The 2023 AI Impacts survey found a median expert prediction of 2047 for "High-Level Machine Intelligence," while Metaculus prediction markets aggregate to approximately 2040-2045 on comparable questions. However, significant uncertainty remains around capability thresholds, measurement methodologies, and potential discontinuous progress.
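A simple way to reconcile disagreeing sources is linear opinion pooling: average each source's cumulative probability of AGI arriving by a given year. The sketch below uses illustrative stand-in numbers; the probabilities and equal weighting are assumptions, not the actual survey or Metaculus data.

```python
import statistics

# Each source gives P(AGI by year) at a few reference years.
# These values are hypothetical placeholders for illustration.
forecasts = {
    "expert_survey":    {2030: 0.15, 2040: 0.35, 2050: 0.55},
    "metaculus":        {2030: 0.45, 2040: 0.75, 2050: 0.90},
    "superforecasters": {2030: 0.28, 2040: 0.50, 2050: 0.65},
}

def pooled_probability(year):
    """Equal-weight linear pool: average P(AGI by `year`) across sources."""
    return statistics.mean(f[year] for f in forecasts.values())

for year in (2030, 2040, 2050):
    print(year, round(pooled_probability(year), 3))
```

Unequal weights (e.g., favoring calibrated superforecasters) would change the pooled curve; the linear pool is just the simplest defensible aggregate.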
AGI Timeline Factors
```mermaid
flowchart TD
    subgraph DRIVERS["Accelerating Factors"]
        COMPUTE[Compute Scaling<br/>10x per 18 months]
        ALGO[Algorithmic Improvements<br/>2x efficiency per year]
        INVEST[Investment Surge<br/>$100B+ training clusters by 2027]
        DATA[Data Availability<br/>Synthetic and multimodal]
    end
    subgraph BRAKES["Decelerating Factors"]
        SCALING[Scaling Limits<br/>Diminishing returns]
        ENERGY[Energy Constraints<br/>Data center capacity]
        REGULATION[Regulatory Barriers<br/>AI governance]
        SAFETY[Safety Research<br/>Alignment requirements]
    end
    subgraph UNCERTAINTY["Key Uncertainties"]
        PARADIGM[Paradigm Shifts<br/>New architectures needed?]
        DEFINITION[Definition Disputes<br/>What counts as AGI?]
        MEASURE[Measurement<br/>How to evaluate AGI?]
    end
    DRIVERS --> TIMELINE[AGI Timeline<br/>Forecast]
    BRAKES --> TIMELINE
    UNCERTAINTY --> TIMELINE
    TIMELINE --> SHORT[Optimistic: 2026-2029]
    TIMELINE --> MED[Median: 2035-2045]
    TIMELINE --> LONG[Conservative: 2050+]
    style SHORT fill:#ffcccc
    style MED fill:#ffffcc
    style LONG fill:#ccffcc
    style TIMELINE fill:#e6f3ff
```

AGI Timeline Risk Assessment
| Factor | Assessment | Timeline Impact | Source |
|---|---|---|---|
| Expert Survey Median | 2040-2050 | Baseline estimate | AI Impacts 2023 |
| Prediction Market Aggregate | 2040-2045 | Market consensus | Metaculus |
| Lab Leader Statements | 2025-2035 | Optimistic bound | OpenAI, DeepMind |
| Scaling Limitations | 2050+ | Conservative bound | Epoch AI |
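The accelerating factors compound: hardware compute scaling of 10x per 18 months and algorithmic efficiency gains of 2x per year (the trends stated in the diagram above) imply a very short doubling time for effective training compute. A quick back-of-envelope check:

```python
import math

# Figures taken from the document's stated trends; this is rough arithmetic,
# not a precise forecast.
hardware_per_year = 10 ** (12 / 18)   # 10x every 18 months -> ~4.64x per year
algo_per_year = 2.0                   # 2x algorithmic efficiency per year
effective_per_year = hardware_per_year * algo_per_year  # ~9.3x per year

doubling_months = 12 * math.log(2) / math.log(effective_per_year)
print(f"effective compute: {effective_per_year:.1f}x/yr, "
      f"doubles every {doubling_months:.1f} months")
# → effective compute: 9.3x/yr, doubles every 3.7 months
```

If either trend slows (the "Decelerating Factors" branch), the effective growth rate drops multiplicatively, which is why scaling limits dominate the conservative forecasts.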
Expert Survey Results
Recent Survey Data (2023-2026)
| Survey | Year | Sample Size | Median AGI Timeline | Key Finding | Source |
|---|---|---|---|---|---|
| AI Impacts ESPAI | 2023 | 2,778 experts | 2047 (HLMI) | 13-year drop from 2060 in 2022 | AI Impacts |
| Digital Minds Survey | 2025 | 67 experts | 2050 (50% probability) | 20% by 2030, 40% by 2040 | Digital Minds Report |
| AI Multiple Meta-Analysis | 2026 | 9,300 predictions | 2040 (aggregated) | Synthesized all public forecasts | AI Multiple |
| Metaculus Community | 2026 | 1,700+ forecasters | Nov 2027 median | 80% CI: July 2026 - Feb 2031 | Metaculus |
| Samotsvety Superforecasters | 2023 | 15 forecasters | 28% by 2030 | Professional forecasters more conservative | 80,000 Hours |
Timeline Acceleration Trends
Expert timelines have consistently shortened over the past decade, with dramatic acceleration since 2022:
| Year | Expert Median (HLMI) | Metaculus Median | Change from Previous |
|---|---|---|---|
| 2018 | 2061 | 2070+ | Baseline |
| 2022 | 2059-2060 | 2055 | -2 years |
| 2023 | 2045-2047 | 2040 | -13 to -15 years |
| 2024 | ≈2040 | 2035 | -5 years |
| 2025 | ≈2035 | 2030 | -5 years |
| 2026 | Varied | Nov 2027 | -3 years |
The 80,000 Hours analysis notes that "in four years, the mean estimate on Metaculus for when AGI will be developed has plummeted from 50 years to five years." Historical expert predictions have consistently been too pessimistic—in 2022, researchers thought AI wouldn't write simple Python code until ~2027, but AI met that threshold by 2023-2024.
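The shrinking gap between survey date and predicted arrival can be made explicit by computing the forecast horizon (median year minus survey year) for each row of the table above; midpoints are used where the table gives a range:

```python
# Survey year -> expert median (midpoints where the table gives a range).
surveys = {2018: 2061, 2022: 2059.5, 2023: 2046, 2024: 2040, 2025: 2035}

horizons = {yr: median - yr for yr, median in surveys.items()}

# Per-year shrink factor of the horizon between consecutive surveys:
years = sorted(horizons)
for a, b in zip(years, years[1:]):
    factor = (horizons[b] / horizons[a]) ** (1 / (b - a))
    print(f"{a}->{b}: horizon {horizons[a]:.1f} -> {horizons[b]:.1f} yr "
          f"({factor:.2f}x per year)")
```

The horizon fell from 43 years to 10 years over seven surveys, with most of the compression after 2022; a horizon shrinking by ~30-40% per year is the quantitative form of the "plummeting" Metaculus estimate quoted above.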
Leading AI researchers increasingly cite rapid scaling of language models and emergent capabilities as evidence for shorter timelines.
Prediction Market Analysis
Metaculus Aggregates (January 2026)
| Question | Current Prediction | Confidence Interval | Forecasters | Source |
|---|---|---|---|---|
| First General AI Announced | Nov 30, 2027 median | July 2026 - Feb 2031 (80%) | 1,700+ | Metaculus |
| Weakly General AI | Nov 2033 | Dec 2028 - Sep 2045 | 1,800+ | Metaculus |
| Transformative AI | 2031 median | 2027-2045 (80%) | 1,000+ | AGI Dashboard |
| AGI by 2030 | ≈40% probability | 25-55% range | Aggregated | Market consensus |
| AGI by 2040 | ≈75% probability | 60-85% range | Aggregated | Market consensus |
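To read probabilities off these aggregates at intermediate years, one can linearly interpolate between the anchor points. The 2030 and 2040 anchors below are the table's central estimates; the 2026 starting point is an assumption added for illustration (real platforms publish full distributions):

```python
# (year, P(AGI by year)) anchor points; the 2026 value is assumed.
cdf_points = [(2026, 0.20), (2030, 0.40), (2040, 0.75)]

def p_agi_by(year):
    """Piecewise-linear interpolation of P(AGI by `year`) between anchors."""
    for (y0, p0), (y1, p1) in zip(cdf_points, cdf_points[1:]):
        if y0 <= year <= y1:
            return p0 + (p1 - p0) * (year - y0) / (y1 - y0)
    raise ValueError("year outside anchored range")

print(round(p_agi_by(2035), 3))  # → 0.575
```

Linear interpolation of a CDF is a crude but transparent model; fitting a lognormal to the same anchors would give slightly different mid-decade probabilities.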
Platform Comparison
| Platform | AGI Median | 50% Probability Year | Key Difference |
|---|---|---|---|
| Metaculus | Mid-2030 | 2030-2031 | Stricter definition requiring robotics |
| Manifold | 2028 | ≈50% before 2028 | More aggressive, market-based |
| Polymarket | 2029-2030 | ≈45% by 2029 | Real-money incentives |
| Expert Surveys | 2040-2047 | 2040-2045 | Academic conservatism |
Market Dynamics
Prediction markets show several notable patterns:
- Dramatic shortening: Metaculus dropped from 50 years to 5 years median since 2020
- Volatility spikes following major capability announcements (GPT-4, Claude 3, o1, o3)
- Shorter timelines in technical communities vs. academic surveys (10-15 year gap)
- Definition sensitivity with different AGI operationalizations varying by 10-20 years
Lab Leader Statements
Industry Timeline Claims (Updated January 2026)
| Organization | Leader | Claimed Timeline | Key Statement | Source |
|---|---|---|---|---|
| OpenAI | Sam Altman | 2025-2028 | "We are now confident we know how to build AGI"; 2026 models will "amaze us" | Sam Altman Blog |
| Anthropic | Dario Amodei | 2026-2027 | "AI may surpass humans in most tasks by 2027"; "rapidly running out of convincing blockers" | Lex Fridman Interview |
| DeepMind | Demis Hassabis | "Within this decade" (by 2030) | "I'd bet on achieving what you might call AGI within the next few years" | Nature interview 2024 |
| DeepMind | Shane Legg | 50% by 2028 | "Minimal AGI" prediction (January 2026) | DeepMind cofounder |
| Meta | Yann LeCun | "Many decades away" | Skeptical of current paradigm reaching AGI | Public statements 2024 |
| xAI | Elon Musk | 2026 | AI "smarter than any single human" | Public statements |
Implied Timelines from Investment Plans
Several labs' public roadmaps suggest aggressive acceleration:
| Metric | 2024 | 2025 | 2026 | 2027 | Source |
|---|---|---|---|---|---|
| Training Run Cost | ≈$100M | ≈$1B | $10B+ | $100B clusters | Dario Amodei |
| Compute per Training | Baseline | 3-10x | 30-100x | 300-1000x | Scaling projections |
| Data Center Power | 100-500 MW | 500 MW-1 GW | 1-5 GW | 5-10 GW | Industry reports |
| Researcher FTEs | 5,000+ | 10,000+ | 20,000+ | 50,000+ | Lab hiring plans |
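The training-cost row implies a growth rate that is easy to check; the sketch below takes the headline figures at face value, treating "$10B+" and "$100B clusters" as point estimates:

```python
# Headline training-run costs from the table above, in USD.
costs = {2024: 1e8, 2025: 1e9, 2026: 1e10, 2027: 1e11}

years = sorted(costs)
# Compound annual growth implied by the endpoints:
growth = (costs[years[-1]] / costs[years[0]]) ** (1 / (years[-1] - years[0]))
print(f"~{growth:.0f}x per year")  # → ~10x per year
```

A sustained 10x-per-year cost trajectory is far steeper than the hardware trend alone, which is why these roadmaps are read as implying aggressive timelines rather than business-as-usual scaling.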
Key Uncertainty Factors
Definition Problems
| AGI Definition | Timeline Range | Key Challenge |
|---|---|---|
| Human-level performance | 2030-2040 | Benchmark gaming |
| Economic substitution | 2040-2060 | Deployment lags |
| Scientific breakthrough | 2035-2050 | Discovery vs. automation |
| Consciousness/sentience | 2050+ | Hard problem of consciousness |
Technical Bottlenecks
Current limitations that may extend timelines:
- Reasoning capabilities: Current models struggle with complex multi-step reasoning
- Long-horizon planning: Limited ability for extended autonomous operation
- Robustness: Brittleness to distribution shifts and adversarial examples
- Sample efficiency: Still require massive training data compared to humans
Scaling Constraints
| Constraint Type | Impact on Timeline | Mitigation Strategies |
|---|---|---|
| Compute hardware | +5-10 years if hits limits | Advanced chip architectures |
| Data availability | +3-5 years | Synthetic data generation |
| Energy requirements | +2-5 years | Efficiency improvements |
| Regulatory barriers | +5-15 years | International coordination |
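One rough way to collapse the constraint table into a single number is an expected-delay calculation: midpoint delay per constraint times an assumed probability that the constraint actually binds. The midpoints come from the table; the binding probabilities are illustrative assumptions, not estimates from any cited source:

```python
# constraint -> (delay midpoint in years, assumed probability it binds)
constraints = {
    "compute_hardware": (7.5, 0.3),   # midpoint of +5-10 yr
    "data":             (4.0, 0.4),   # midpoint of +3-5 yr
    "energy":           (3.5, 0.5),   # midpoint of +2-5 yr
    "regulation":       (10.0, 0.2),  # midpoint of +5-15 yr
}

# Treats constraints as independent and delays as additive -- a strong
# simplification, since binding constraints likely correlate.
expected_delay = sum(delay * p for delay, p in constraints.values())
print(f"expected added delay: {expected_delay:.2f} years")
```

Under these assumptions the expected delay is several years, which is comparable to the gap between the market and expert-survey medians; different binding probabilities shift it substantially.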
Current Capability Trajectory
2024 State Assessment
Recent capabilities suggest accelerating progress toward AGI:
- Multi-modal integration: Vision, text, and code in single models
- Tool use: Effective API calls and workflow automation
- Emergent reasoning: Chain-of-thought and constitutional approaches
- Scientific research: Automated hypothesis generation and testing
Projection Methods
| Approach | 2030 Prediction | Methodology | Limitations |
|---|---|---|---|
| Scaling laws | 85% human performance | Extrapolate compute trends | May hit diminishing returns |
| Expert elicitation | 60% probability | Survey aggregation | Bias and overconfidence |
| Benchmark tracking | 90% on specific tasks | Performance trajectory | Narrow evaluation |
| Economic modeling | 40% job automation | Labor substitution | Deployment friction |
Disagreement and Cruxes
Major Points of Contention
Timeline Pessimists (2050+) argue:
- Current paradigms (transformers, scaling) will hit fundamental limits
- Alignment difficulty will require extensive safety research before deployment
- Economic and regulatory barriers will slow deployment
- Key cognitive capabilities (long-horizon planning, true reasoning) may require architectural breakthroughs
Timeline Optimists (2025-2035) contend:
- Scaling laws will continue with current paradigms through 2030+
- Emergent capabilities from larger models will bridge remaining capability gaps
- Competitive pressure and $100B+ investments will accelerate development
- Recent progress (o1, o3 reasoning, agents) shows faster-than-expected capability gains
Key Cruxes
| Question | Impact on Timeline | Current Evidence | Optimist View | Pessimist View |
|---|---|---|---|---|
| Will scaling laws continue? | ±10 years | Mixed signals since GPT-4 | Compute scaling to $100B clusters will unlock new capabilities | Diminishing returns visible; new paradigms needed |
| Can transformers achieve AGI? | ±15-20 years | Chain-of-thought, o1/o3 reasoning | Architecture is sufficient with scale | Fundamental limits on reasoning and planning |
| How hard is alignment? | ±10-15 years | Constitutional AI, RLHF improvements | Tractable with current approaches | Requires deep unsolved problems |
| Will regulation slow progress? | ±5-15 years | EU AI Act, compute governance | Light touch will prevail | Precautionary regulation inevitable |
| Is AGI a single threshold? | ±10 years | Definitional debates | Continuous capability improvement | Discrete capability jumps required |
Timeline Implications
Strategic Considerations
Different timelines imply varying urgency for:
- Safety research: Shorter timelines require immediate focus on alignment solutions
- Governance frameworks: International coordination becomes critical
- Economic preparation: Labor market disruption planning
- Coordination mechanisms: Preventing dangerous racing dynamics
Policy Relevance
Timeline uncertainty affects regulation approaches:
- Precautionary principle: Plan for shortest reasonable timelines
- Adaptive governance: Build flexible frameworks for multiple scenarios
- Research prioritization: Balance capability and safety advancement
Sources & Resources
Primary Research
| Category | Source | Key Contribution |
|---|---|---|
| Expert Surveys | AI Impacts 2023 Survey | Largest expert survey (2,778 respondents) |
| Prediction Markets | Metaculus AGI Questions | Continuous probability tracking (1,700+ forecasters) |
| Technical Analysis | Epoch AI Scaling Reports | Compute and training cost projections |
| Industry Perspectives | OpenAI Planning Documents | Lab development roadmaps |
| Meta-Analysis | 80,000 Hours Timeline Review | Synthesis of forecaster disagreements |
2025-2026 Key Sources
| Source | Date | Key Finding | URL |
|---|---|---|---|
| Sam Altman "Gentle Singularity" | Jan 2025 | "We know how to build AGI"; 2026 will see "systems that figure out novel insights" | Blog |
| Dario Amodei Lex Fridman Interview | Nov 2024 | "Rapidly running out of convincing blockers"; 2026-2027 possible | Transcript |
| AI Multiple Meta-Analysis | Jan 2026 | 9,300 predictions analyzed; aggregated median ≈2040 | Analysis |
| Digital Minds Forecasting | 2025 | 67 experts: 20% by 2030, 50% by 2050 | Report |
| AGI Timelines Dashboard | Jan 2026 | Combined forecasts: 2031 median (80% CI: 2027-2045) | Dashboard |
Forecasting Organizations
| Organization | Focus Area | Key Resources |
|---|---|---|
| AI Impacts | Expert surveys and trend analysis | Annual ESPAI survey reports |
| Metaculus | Prediction markets | AGI timeline questions, AGI Horizons tournament |
| Epoch AI | Compute trends and scaling laws | Technical reports, training cost projections |
| Future of Humanity Institute | Long-term forecasting | Academic papers (now closed) |
| Samotsvety Forecasting | Superforecaster aggregation | AGI probability estimates |
Related Analysis
- Scaling debates: See scaling law discussion
- Capability analysis: Review core capabilities development
- Timeline uncertainty: Explore forecasting methodology
- Risk implications: Consider takeoff dynamics scenarios