Reputation-based prediction aggregation platform that has become the primary source for AI timeline forecasts, with over 1 million predictions across 15,000+ questions. Created by Anthony Aguirre (FLI/FLF President).
Approaches

- AI-Augmented Forecasting (Approach): AI-augmented forecasting combines AI computational strengths with human judgment, achieving 5-15% Brier score improvements and 50-200x cost reductions compared to human-only forecasting. However, A... (Quality: 54/100)
- Prediction Markets (AI Forecasting) (Approach): Prediction markets achieve Brier scores of 0.16-0.24 (15-25% better than polls) by aggregating dispersed information through financial incentives, with platforms handling $1-3B annually. For AI saf... (Quality: 56/100)
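The Brier scores cited above are mean squared errors between probability forecasts and binary outcomes. A minimal sketch of the comparison, using made-up forecasts rather than data from the listed pages:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and 0/1 outcomes.
    An uninformative 50% forecast scores 0.25; lower is better."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical resolved questions: 1 = happened, 0 = did not
outcomes = [1, 0, 1, 1, 0]
market_forecasts = [0.8, 0.3, 0.7, 0.9, 0.2]  # sharper, better-calibrated
poll_forecasts = [0.6, 0.4, 0.5, 0.7, 0.4]    # hedged toward 50%

market = brier_score(market_forecasts, outcomes)  # 0.054
poll = brier_score(poll_forecasts, outcomes)      # 0.164
improvement = (poll - market) / poll  # markets' relative edge over polls
```

In this toy example the market's relative improvement is far larger than the 15-25% figure above; with realistic question sets the two forecast sources differ much less.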
Analysis
- AI Forecasting Benchmark Tournament (Project): Quarterly competition (Q2 2025: 348 questions, 54 bot-makers, $30K prizes) comparing human Pro Forecasters against AI bots, with statistical testing showing humans maintain significant lead (p=0.00... (Quality: 41/100)
- AI Risk Activation Timeline Model (Analysis): Comprehensive framework mapping AI risk activation windows with specific probability assessments: current risks already active (disinformation 95%+, spear phishing active), near-term critical windo... (Quality: 66/100)
- Capability-Alignment Race Model (Analysis): Quantifies the capability-alignment race showing capabilities currently ~3 years ahead of alignment readiness, with gap widening at 0.5 years/year driven by 10²⁶ FLOP scaling vs. 15% interpretabili... (Quality: 62/100)
- XPT (Existential Risk Persuasion Tournament) (Project): A 2022 forecasting tournament with 169 participants found superforecasters severely underestimated AI progress (2.3% probability for IMO gold vs actual 2025 achievement) and gave 8x lower AI extinc... (Quality: 54/100)
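The Capability-Alignment Race Model summary above implies a simple linear projection. A minimal sketch under that reading, where the only figures taken from the summary are the ~3-year current lead and the 0.5 years/year widening rate (the function name and linear form are assumptions, not the model's actual specification):

```python
def alignment_gap(years_from_now, initial_gap=3.0, widening_rate=0.5):
    """Linear projection of how far capabilities lead alignment readiness,
    in years, assuming the widening rate stays constant."""
    return initial_gap + widening_rate * years_from_now

# Under these assumptions, the gap doubles from 3 to 6 years after 6 years.
gap_now = alignment_gap(0)   # 3.0
gap_later = alignment_gap(6) # 6.0
```

The linearity is the key (and strongest) assumption: the page it summarizes attributes the widening to compute scaling versus interpretability progress, either of which could bend the trend.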
Other
- Anthony Aguirre (Person): Physicist and AI safety advocate serving as Executive Director of the Future of Life Institute and President of the Future of Life Foundation. Faggin Presidential Professor for Physics of Informati...
- Max Tegmark (Person): Comprehensive biographical profile of Max Tegmark covering his transition from cosmology to AI safety advocacy, his role founding the Future of Life Institute, and his controversial Mathematical Un... (Quality: 63/100)
- Eli Lifland (Person): Biographical profile of Eli Lifland, a top-ranked forecaster and AI safety researcher who co-authored the AI 2027 scenario forecast and co-founded the AI Futures Project. The page documents his for... (Quality: 58/100)
Concepts
- AI Timelines (Concept): Forecasts and debates about when transformative AI capabilities will be developed (Quality: 95/100)
- Long-Timelines Technical Worldview (Concept): Comprehensive overview of the long-timelines worldview (20-40+ years to AGI, 5-20% P(doom)), arguing for foundational research over rushed solutions based on historical AI overoptimism, current sys... (Quality: 91/100)
- Epistemic Orgs Overview: Organizations advancing forecasting methodology, prediction aggregation, and epistemic infrastructure to improve decision-making on AI safety and existential risks. (Quality: 70/100)
- Novel / Unknown Approaches (Capability): Analyzes probability (1-15%) of novel AI paradigms emerging before transformative AI, systematically reviewing historical prediction failures (expert AGI timelines shifted 43 years in 4 years, 13 y... (Quality: 53/100)
Organizations
- Epoch AI (Organization): Epoch AI maintains comprehensive databases tracking 3,200+ ML models showing 4.4x annual compute growth and projects data exhaustion 2026-2032. Their empirical work directly informed EU AI Act's 10... (Quality: 51/100)
- Forecasting Research Institute (FRI) (Organization): FRI's XPT tournament found superforecasters gave 9.7% average probability to AI progress outcomes that occurred vs 24.6% from domain experts, suggesting superforecasters systematically underestimat... (Quality: 55/100)
- Bridgewater AIA Labs (Organization): Bridgewater AIA Labs launched a $2B AI-driven macro fund in July 2024 that returned 11.9% in 2025, using proprietary ML models plus LLMs from OpenAI/Anthropic/Perplexity with multi-layer guardrails... (Quality: 66/100)
- Rethink Priorities (Organization): Rethink Priorities is a research organization founded in 2018 that grew from 2 to ~130 people by 2022, conducting evidence-based analysis across animal welfare, global health, and AI governance. Th... (Quality: 60/100)
- FutureSearch (Organization): FutureSearch is an AI forecasting startup founded by former Metaculus leaders that combines LLM research agents with human judgment, demonstrating some prediction accuracy but facing uncertain comm... (Quality: 50/100)
Key Debates
- The Case For AI Existential Risk (Argument): Comprehensive formal argument that AI poses 5-14% median extinction risk by 2100 (per 2,788 researcher survey), structured around four premises: capabilities will advance, alignment is hard (with d... (Quality: 66/100)
- Is AI Existential Risk Real? (Crux): Covers the foundational AI x-risk debate across four core cruxes: instrumental convergence, warning sign availability, corrigibility achievability, and timeline urgency. Incorporates quantitative e... (Quality: 12/100)