AI Futures Project

Summary

The AI Futures Project is a nonprofit research organization co-founded in 2024 by Daniel Kokotajlo, Eli Lifland, and Thomas Larsen that produces detailed AI capability forecasts, most notably the AI 2027 scenario depicting rapid progress to superintelligence. The organization has revised its timelines significantly over time and has drawn substantial criticism for aggressive assumptions and methodological limitations.

Quick Assessment

Type: Research organization, 501(c)(3) nonprofit
Founded: 2024
Founders: Daniel Kokotajlo, Eli Lifland, and Thomas Larsen
Focus: AI capability forecasting, AGI/ASI timelines, scenario planning
Flagship work: AI 2027 scenario forecast (April 2025)
Funding: Charitable donations and grants from the Survival and Flourishing Fund and private donors
Key personnel: Daniel Kokotajlo, Eli Lifland, Thomas Larsen, Romeo Dean, Jonas Vollmer, Lauren Mangla

Key Links

Official website: ai-futures.org
Wikipedia: en.wikipedia.org

Overview

The AI Futures Project is a nonprofit research organization that specializes in forecasting the development and societal impact of advanced artificial intelligence, including potential paths to AGI and superintelligence. Founded in 2024 by Daniel Kokotajlo, Eli Lifland, and Thomas Larsen, the organization emerged from concerns about transparency and safety practices at leading AI companies; Kokotajlo is a former researcher on OpenAI's governance team.

The organization's primary mission is to develop detailed scenario forecasts of advanced AI system trajectories to inform policymakers, researchers, and the public. Rather than producing high-level summaries, the AI Futures Project creates step-by-step projections with explicit modeling of capability milestones, timelines, and societal impacts. The organization operates with a small research staff and network of advisors from AI policy, forecasting, and risk analysis fields.

Beyond written reports, the AI Futures Project conducts tabletop exercises and workshops based on its scenarios, involving participants from academia, technology, and public policy. The organization is registered as a 501(c)(3) nonprofit with EIN 99-4320292 and is funded entirely through charitable donations and grants.

History and Founding

Daniel Kokotajlo's Background

The AI Futures Project traces its origins to Daniel Kokotajlo's work on AI scenario planning, beginning with his influential 2021 forecast "What 2026 Looks Like," which examined potential AI developments from 2022 through 2026. This early work established Kokotajlo's reputation for detailed, grounded AI forecasting.

In 2022, Kokotajlo joined OpenAI's policy team, where he worked on governance and scenario planning. His tenure there ended in April 2024 when he left the company in what became a high-profile departure covered by major media outlets including the New York Times.

Departure from OpenAI

Kokotajlo left OpenAI citing concerns that the company was prioritizing rapid product development over AI safety and advancing without sufficient safeguards, and he criticized what he saw as a lack of transparency at leading AI companies. He declined to sign a non-disparagement agreement that would have restricted his ability to speak publicly about AI risks, even though refusing put his vested equity at risk; the equity was ultimately not taken away.

This departure positioned Kokotajlo as one of several researchers who left leading AI labs over safety concerns, joining a broader conversation about the pace of AI development and the adequacy of safety measures.

Establishing the Organization

Following his departure from OpenAI, Kokotajlo founded the AI Futures Project in 2024, originally incorporated as Artificial Intelligence Forecasting Inc. The organization was established to continue independent AI forecasting work without the constraints of operating inside a major AI company.

The project assembled a small team of researchers and advisors with backgrounds in AI forecasting, policy, and technical research. Key early personnel included Eli Lifland (a top-ranked superforecaster), Thomas Larsen (founder of the Center for AI Policy), and Romeo Dean (a Harvard graduate with expertise in AI chip forecasting).

Key Personnel

Core Team

Daniel Kokotajlo serves as Executive Director, overseeing research and policy recommendations. Beyond his OpenAI background, Kokotajlo was named to the TIME100 AI list in 2024 in recognition of his influence on AI safety discourse. He leads the organization's scenario forecasting work and is the primary author on major publications.

Eli Lifland works as a researcher focusing on scenario forecasting and AI capabilities. Lifland brings exceptional forecasting credentials, ranking #1 on the RAND Forecasting Initiative all-time leaderboard. He co-founded and advises Sage (which creates interactive AI explainers), worked on Elicit (an AI-powered research assistant), and co-created TextAttack, a Python framework for adversarial text examples.

Thomas Larsen serves as a researcher specializing in scenario forecasting and the goals and impacts of AI agents. He founded the Center for AI Policy, an AI safety advocacy organization, and previously conducted AI safety research at the Machine Intelligence Research Institute.

Romeo Dean focuses his research on forecasting AI chip production and usage. He graduated cum laude from Harvard with a concurrent master's degree in computer science, focusing on security, hardware, and systems. He previously served as an AI Policy Fellow at the Institute for AI Policy and Strategy.

Jonas Vollmer initially served as Chief Operating Officer, handling communications and operations. Vollmer is also involved with Macroscopic Ventures, an AI venture fund and philanthropic foundation that was an early investor in Anthropic, a company since valued at roughly $60 billion. He co-founded the Atlas Fellowship, a global talent program, and the Center on Long-Term Risk, an AI safety nonprofit.

Lauren Mangla joined as Head of Special Projects in January 2026, succeeding Jonas Vollmer in the organizational structure. She works on AI 2027 tabletop exercises, communications, and hiring. Mangla previously managed fellowships and events at Constellation (an AI safety research center), served as Executive Director of the Supervised Program for Alignment Research (SPAR), and held internships at NASA, the Department of Transportation, and New York City policy organizations.

Contributors

Scott Alexander, the prominent blogger behind Astral Codex Ten, has contributed to the project both as an editor and author. He significantly rewrote content for the AI 2027 project to enhance engagement and has authored related blog posts analyzing the project's models, such as "Beyond The Last Horizon."

The project has also received support from Lightcone Infrastructure (Oliver Habryka, Rafe Kennedy, Raymond Arnold) for website development and from FutureSearch (Tom Liptay, Finn Hambly, Sergio Abriola, Tolga Bilge) for forecasting work.

Major Research and Publications

AI 2027: Flagship Scenario Forecast

The organization's most significant output is AI 2027, a comprehensive scenario forecast released in April 2025. The report was authored by Kokotajlo, Lifland, Larsen, and Dean, with editing by Scott Alexander.

AI 2027 presents a detailed examination of potential AI developments from 2025 through 2027, depicting very rapid progress in AI capabilities. The scenario includes the rise of increasingly capable AI agents in 2026, full coding automation by early 2027, and an intelligence explosion by late 2027. The forecast includes explicit timeline models, capability trajectories, and simulations predicting superhuman coders by 2027.

Rather than presenting a single prediction, AI 2027 offers two contrasting endings. The first depicts catastrophic loss of human control driven by international competition, particularly between the United States and China. The second shows coordinated global action that slows AI development and implements stronger oversight, leading to more positive outcomes. The authors emphasize that these are hypothetical planning tools designed to explore different scenarios rather than literal forecasts of what will happen.

The report received significant attention, attracting substantial traffic to the AI Futures Project website and generating widespread discussion through associated media appearances, including an interview on the Dwarkesh Podcast. The scenario has been cited in U.S. policy discussions, with some political figures referencing its warnings about international AI coordination challenges.

December 2025 Timelines Update

In December 2025, the AI Futures Project released a significant update to its timelines and takeoff model, which governs predictions about when AIs will reach key capability milestones such as Automated Coder (AC) and superintelligence (ASI).

Notably, the updated model predicts timelines to full coding automation that are roughly 3-5 years longer than those in the April 2025 AI 2027 forecast. The organization attributed this extension primarily to more conservative modeling of pre-automation AI R&D speedups and to recognition of potential data bottlenecks. The update noted underlying trends including a slowdown in pretraining scaling since what the team calls the "GPT-4.5 debacle" and massive efforts by AI labs to create high-quality human-curated data, with spending reaching single-digit billions of dollars.

The December update acknowledged that the timeline extension resulted from correcting previously unrecognized limitations and mistakes in the model rather than from significant new empirical evidence. The model did, however, incorporate new data from organizations such as METR showing that coding task horizons have been doubling approximately every 7 months, with faster doubling times of roughly 4-5 months observed since 2024.
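As a rough illustration of what such a doubling trend implies, the sketch below extrapolates a task-length horizon forward under 7-month and 4.5-month doubling times. The starting horizon (15 minutes, echoing the Claude 3.7 Sonnet figure cited later in this article) and the one-work-week target are illustrative assumptions, not parameters taken from the AI Futures Project's model.

```python
import math

def months_until_target(start_minutes: float, target_minutes: float,
                        doubling_months: float) -> float:
    """Months needed for an exponentially growing task horizon to reach a target task length."""
    doublings_needed = math.log2(target_minutes / start_minutes)
    return doublings_needed * doubling_months

start = 15.0                 # assumed starting horizon: ~15-minute tasks
target = 40 * 60.0           # illustrative target: a 40-hour (one work-week) task

for doubling in (7.0, 4.5):  # doubling times discussed in the December 2025 update
    m = months_until_target(start, target, doubling)
    print(f"doubling every {doubling} months -> ~{m:.0f} months (~{m / 12:.1f} years)")
```

Under these assumptions the gap between a 7-month and a 4.5-month doubling time is roughly a year and a half of calendar time, which is why the post-2024 acceleration matters so much to the model.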

Evolution of Timelines Estimates

The organization has demonstrated a willingness to update its forecasts as new information emerges. Kokotajlo's personal median timeline estimate has followed this trajectory:

  • 2018: ~2070
  • Early 2020: ~2050
  • November 2020: ~2030
  • August 2021: ~2029
  • December 2022: ~2027
  • February 2025: ~2028
  • January 2026: ~2031 (more precisely, December 2030)

These revisions reflect both new empirical data and corrections to modeling assumptions. The organization consistently emphasizes high uncertainty about AGI and superintelligence timelines, stating they cannot confidently predict a specific year.

Blog and Ongoing Research

Beyond major reports, the AI Futures Project maintains an active blog (blog.ai-futures.org) with analysis of AI progress and forecasting methodology. Key posts include:

  • "Our first project: AI 2027" by Daniel Kokotajlo, announcing the flagship project
  • "Beyond The Last Horizon" by Scott Alexander, analyzing METR data on coding task horizons and the acceleration of progress
  • Ongoing analysis of AI R&D multipliers (estimated around 1.5x) and superhuman coder predictions

The organization continues to track AI progress against its models, noting developments like Claude 3.7 Sonnet achieving 80% success on 15-minute human-equivalent coding tasks.
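To make the AI R&D multiplier mentioned above concrete, the toy model below shows how a speedup factor compresses a fixed stock of remaining "human-only" research, and how letting the multiplier grow with accumulated progress produces the feedback dynamic at the heart of takeoff debates. All numbers are made up for illustration; this is not the project's published takeoff model.

```python
def years_to_finish(total_progress: float, initial_multiplier: float,
                    multiplier_growth_per_unit: float, dt: float = 0.01) -> float:
    """Simulate calendar years to accumulate `total_progress` units of human-equivalent R&D,
    where the speed multiplier grows linearly with progress made so far (a toy feedback loop)."""
    progress, years = 0.0, 0.0
    while progress < total_progress:
        multiplier = initial_multiplier + multiplier_growth_per_unit * progress
        progress += multiplier * dt  # progress accrues faster as the multiplier rises
        years += dt
    return years

# Illustrative numbers only: 10 units of remaining progress, starting at a 1.5x speedup.
print(f"constant 1.5x speedup:         {years_to_finish(10, 1.5, 0.0):.1f} calendar years")
print(f"speedup that grows with gains: {years_to_finish(10, 1.5, 0.5):.1f} calendar years")
```

The contrast between the two printed lines is the basic intuition behind intelligence-explosion arguments: a modest initial speedup matters far more if the speedup itself compounds.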

Funding

The AI Futures Project operates as a 501(c)(3) nonprofit funded entirely through charitable donations and grants. The organization has received grant funding from the Survival and Flourishing Fund.

In September 2025, the organization received significant additional funding: a $1.44 million grant from the Survival and Flourishing Fund and $3.05 million from private donors. The Survival and Flourishing Fund also offered to match the next $500,000 in donations.

The organization accepts donations online and through major donor-advised fund providers using its EIN 99-4320292. However, specific details about total funding amounts, additional major donors, or comprehensive financial information are not publicly available.

Reception and Impact

Positive Reception

The AI 2027 scenario has been praised for its level of detail and usefulness for strategic planning. The project's approach of providing step-by-step projections rather than high-level summaries has been valued by researchers and policymakers seeking to understand potential AI trajectories.

The organization's forecasting work has achieved influence within the AI safety community. Its models are referenced in discussions on LessWrong, in METR analyses, and on various AI safety blogs. The project's willingness to provide detailed, falsifiable predictions has been seen as advancing the discourse beyond vague warnings about AI risks.

Kokotajlo's 2021 forecasting work "What 2026 Looks Like" has been noted for its prescience, with many predictions matching actual AI developments observed in the 2020s. This track record has lent credibility to the organization's more recent forecasting efforts.

Criticisms and Controversies

The AI Futures Project's work has attracted substantial criticism, particularly regarding its methodological approach and timeline predictions.

Aggressive Timelines: Critics have characterized the organization's timelines as "implausibly aggressive," arguing that the models overestimate the pace of AI progress. The authors themselves have since revised their forecasts toward slower paths to autonomous AI, acknowledging that the initial projections were too aggressive.

Methodological Concerns: The modeling structure has been described as "highly questionable," with little empirical validation. Critics point to reliance on limited data points, such as METR's horizon-doubling trend, which they argue is not sufficiently robust as a standalone indicator. The models' superexponential trends have also been characterized as largely insensitive to perturbations in their starting conditions, implying similar takeoff dates almost regardless of the inputs.
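One way to see the force of this last point is a stylized superexponential model (with made-up parameters, not the project's published specification): if each successive doubling of the task horizon takes a fixed fraction less calendar time than the one before it, the total time for any large number of doublings is bounded by a convergent geometric series, so the implied arrival date moves little even when the assumed starting point changes substantially.

```python
def years_for_k_doublings(current_doubling_years: float, shrink: float, k: int) -> float:
    """Calendar years to complete k successive doublings when each doubling takes
    `shrink` times as long as the previous one (sum of a geometric series)."""
    return current_doubling_years * (1 - shrink**k) / (1 - shrink)

# Illustrative parameters only: a 0.5-year current doubling time, each doubling 10% faster.
d, r = 0.5, 0.9
asymptote = d / (1 - r)  # upper bound on total time, no matter how many doublings remain
for k in (10, 20, 40):   # very different assumptions about how far away the target horizon is
    print(f"{k:2d} doublings needed -> {years_for_k_doublings(d, r, k):.2f} years "
          f"(asymptote {asymptote:.2f})")
```

In this toy setup, quadrupling the number of doublings required changes the answer by less than two years, which is the kind of insensitivity critics highlight.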

Uncertainty Underestimation: Analysts have argued that even the "massive uncertainty blobs" in the project's models represent "severe underestimates" of actual uncertainty. Some have suggested that plans relying on such forecasts are "doomed" and that robust strategies accounting for wider uncertainty ranges are needed instead.

Hype and Fictional Elements: Some critics have labeled the work as "fiction, not science," arguing it relies on "fictional numbers" and indefinite exponentials. Concerns have been raised that the scenarios stir fear, uncertainty, and doubt in ways that could be counterproductive, potentially acting as marketing that fuels funding for leading AI labs and intensifying the US-China AI race through conflict narratives.

Benchmark Gaps: Critics highlight that the models ignore important factors such as tacit knowledge requirements, proprietary data needs, and potential interruptions to compute availability or capitalization. The 80% reliability threshold used in some predictions has been characterized as "shockingly unreliable" for autonomous transformative AI systems.
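One simple way to see why critics consider an 80% per-task success rate low for autonomous operation is to compound it over multi-step work. The sketch below assumes independent steps, a simplifying assumption made here for illustration rather than a claim taken from the cited critiques.

```python
# Illustrative only: probability of completing an n-step task with no failures,
# assuming each step succeeds independently with probability p.
p = 0.80
for n in (1, 5, 10, 20):
    print(f"{n:2d} independent steps at {p:.0%} per-step reliability -> "
          f"{p**n:.1%} end-to-end success")
```

At 20 independent steps, end-to-end success drops to roughly 1%, which is why per-step reliability thresholds matter so much for claims about autonomous systems.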

Ecosystem Assumptions: The scenarios have been criticized for assuming a limited AGI ecosystem with only 1-2 dominant models, rather than anticipating a "Cambrian explosion" of diverse AI systems. This assumption may significantly affect the plausibility of the rapid takeoff scenarios depicted.

Societal Response Underestimation: Some analysts argue the scenarios underplay potential public backlash and government interventions that could slow AI development. Regulatory actions, data center restrictions, and compute lead requirements could potentially extend timelines to 5-10 years beyond the project's predictions.

Community Response

Within the rationalist and effective altruism communities, opinions on the AI Futures Project's work are mixed. Active discussions occur on LessWrong and the Effective Altruism Forum about the project's timelines, methodology, and implications.

Community members value the modeling transparency that allows for detailed evaluation of assumptions and arguments. However, there is substantial debate about whether the scenarios accelerate dangerous AI races, the adequacy of control measures proposed, and whether alignment challenges are being sufficiently addressed.

Key Uncertainties

Several major uncertainties affect the reliability and implications of the AI Futures Project's forecasting work:

Capability Progress Rates: Whether AI capabilities will continue to advance at the rapid pace projected, or whether fundamental limitations, data bottlenecks, or algorithmic challenges will slow progress substantially.

Benchmark-to-Deployment Gaps: The extent to which improvements on AI benchmarks translate to real-world task automation, particularly for complex cognitive work requiring tacit knowledge, judgment, and integration of multiple capabilities.

Reliability Thresholds: What level of reliability is necessary for AI systems to effectively automate various categories of cognitive work, and how quickly AI systems will achieve those thresholds.

Coordination Possibilities: Whether international coordination to slow or regulate AI development is feasible given competitive pressures, national security concerns, and the distributed nature of AI research.

Alignment Difficulty: The fundamental difficulty of aligning increasingly capable AI systems with human values, and whether alignment techniques will scale to superhuman AI systems.

Societal and Regulatory Response: How governments, institutions, and publics will respond to rapid AI advancement, including possibilities for regulation, restrictions, or changes in social license for AI development.

Economic Incentives: Whether economic pressures will drive continued rapid AI investment and deployment, or whether market dynamics, resource constraints, or profitability challenges will moderate the pace of development.

Related Pages

Top Related Pages

People

Vipul Naik, Leopold Aschenbrenner

Labs

Palisade Research

Analysis

Anthropic IPO

Concepts

LessWrong, International Coordination, Elicit (AI Research Tool), Eli Lifland, Survival and Flourishing Fund, FutureSearch

Organizations

ControlAI, Frontier Model Forum, Situational Awareness LP, Samotsvety, FutureSearch, Future of Life Institute (FLI)