
Eli Lifland

Person

Biographical profile of Eli Lifland, a top-ranked forecaster and AI safety researcher who co-authored the AI 2027 scenario forecast and co-founded the AI Futures Project. The page documents his forecasting track record, the AI Futures timelines model, and his contributions to AI safety discourse.

Related Organizations: AI Futures Project · Samotsvety · Metaculus · Open Philanthropy · LessWrong

Quick Assessment

Primary Focus: AGI forecasting, scenario planning, AI governance
Key Achievements: #1 on the RAND Forecasting Initiative all-time leaderboard; co-authored the AI 2027 scenario forecast; co-lead of the Samotsvety forecasting team
Current Roles: Co-founder and researcher at the AI Futures Project; co-founder and advisor at Sage; guest fund manager at the Long-Term Future Fund
Educational Background: Computer science and economics degrees from the University of Virginia
Notable Contributions: AI 2027 scenario forecast; AI Futures timelines model; top-ranked forecasting track record

Key Links

Official Website: elilifland.com

Overview

Eli Lifland is a forecaster and AI safety researcher who ranks #1 on the RAND Forecasting Initiative all-time leaderboard. He co-leads the Samotsvety forecasting team, which placed first in the CSET-Foretell/INFER competition in 2020, 2021, and 2022.[1] His work focuses on AGI timeline forecasting, scenario planning, and AI safety.

Lifland co-founded the AI Futures Project alongside Daniel Kokotajlo and Thomas Larsen, and co-authored AI 2027, a detailed scenario forecast exploring potential AGI development trajectories.[2][3] The project, with contributions from Scott Alexander and Romeo Dean, presents a concrete scenario for how superhuman AI capabilities might emerge, covering geopolitical tensions, technical breakthroughs, and alignment challenges.

Lifland also co-founded and advises Sage, an organization building interactive AI explainers and forecasting tools, and serves as a guest fund manager at the Long-Term Future Fund.[4] He previously worked on Elicit at Ought and co-created TextAttack, a Python framework for adversarial attacks in natural language processing.[5]

AI Futures Project and AI 2027

Lifland is a co-founder and researcher at the AI Futures Project, a 501(c)(3) organization focused on AGI forecasting, scenario planning, and policy engagement.[6] He co-founded the organization with Daniel Kokotajlo (Executive Director, former OpenAI researcher) and Thomas Larsen (founder of the Center for AI Policy).[7]

The project's flagship output is AI 2027, a detailed scenario forecast released in April 2025 exploring how superintelligence might emerge.[8] The scenario was co-authored with Scott Alexander (who primarily assisted with rewriting) and Romeo Dean (who contributed supplements on compute and security considerations).[9]

The AI 2027 forecast presents a concrete narrative of AI development, including:

  • Increasingly capable AI agents automating significant portions of AI research and development[10]
  • Geopolitical tensions, particularly a US-China AI race, influencing safety decisions and deployment timelines[11]
  • Alignment challenges, including exploration of safer model series that use chain-of-thought reasoning to address failures[12]
  • Economic impacts, including widespread job displacement[13]

The project received significant attention and has been discussed in venues including Lawfare Media, ControlAI, and a CEPR webinar.[14][15][16]

AI Futures Timelines Model

The AI Futures Project maintains a quantitative timelines model that generates probability distributions for key AGI milestones such as Automated Coder (AC) and superintelligence (ASI). The model incorporates benchmark tracking, compute availability, algorithmic progress, and other inputs to produce forecasts that team members then adjust based on their individual judgment.[17]
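The project's write-ups describe this model only at a high level, so the sketch below is not its actual implementation. It illustrates the general technique (Monte Carlo sampling over uncertain growth inputs to produce a distribution over milestone dates), with every parameter value being a hypothetical assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # Monte Carlo samples

# All distributions below are illustrative assumptions, not the project's inputs.
# Current task-horizon capability (hours of human work an AI can complete).
current_horizon = rng.lognormal(mean=np.log(1.0), sigma=0.5, size=N)
# Horizon required for an "Automated Coder"-style milestone, in hours.
required_horizon = rng.lognormal(mean=np.log(4_000.0), sigma=0.7, size=N)
# Annual multiplicative growth in horizon (compute plus algorithmic progress),
# floored above 1 so capability never regresses in a sample.
annual_growth = np.maximum(rng.lognormal(mean=np.log(3.0), sigma=0.4, size=N), 1.05)

# Years until the milestone under steady exponential growth, per sample.
years = np.log(required_horizon / current_horizon) / np.log(annual_growth)
milestone_year = 2026 + np.clip(years, 0, None)

for q in (10, 50, 90):
    print(f"{q}th percentile milestone year: {np.percentile(milestone_year, q):.1f}")
```

Each sample represents one plausible world; percentiles of the resulting distribution play the role of the model's probabilistic milestone forecasts, which forecasters then adjust by judgment.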

Lifland's personal AGI timeline estimates have shifted as new evidence has emerged. His median TED-AI (a general intelligence milestone) forecast has followed this trajectory:[18]

  • 2021: ~2060
  • July 2022: ~2050
  • January 2024: ~2038
  • Mid-2024: ~2035
  • December 2024: ~2032
  • April 2025: ~2031
  • July 2025: ~2033
  • January 2026: ~2035

The AI Futures Project has emphasized that the AI 2027 scenario was never intended as a confident prediction that AGI would arrive in 2027, and that all team members hold high uncertainty about when AGI and ASI will be built.[19] The December 2025 model update lengthened the predicted timeline to full coding automation by 3-5 years relative to the April 2025 AI 2027 forecast, a shift attributed primarily to more conservative modeling of pre-automation AI R&D speedups and to potential data bottlenecks.[20]

Forecasting Track Record

Lifland ranks #1 on the RAND Forecasting Initiative (formerly CSET-Foretell/INFER) all-time leaderboard.[21] On GJOpen, his Brier score of 0.23 beats the median of 0.301 (a ratio of 0.76, where lower scores are better), and he placed 2nd in the Metaculus Economist 2021 tournament and 1st in the Salk Tournament as of September 2022.[22]
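For context, a Brier score is the mean squared error between probability forecasts and binary outcomes, so lower is better; the quoted ratio is simply 0.23 / 0.301 ≈ 0.76. A minimal worked example, using made-up forecasts:

```python
import numpy as np

def brier_score(probs, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes (lower is better)."""
    probs, outcomes = np.asarray(probs, dtype=float), np.asarray(outcomes, dtype=float)
    return float(np.mean((probs - outcomes) ** 2))

# Five hypothetical yes/no questions and their resolutions.
forecasts = [0.9, 0.2, 0.7, 0.4, 0.85]
outcomes  = [1,   0,   1,   0,   1]
print(brier_score(forecasts, outcomes))  # 0.0645
print(0.23 / 0.301)                      # 0.764..., the "ratio 0.76" cited above
```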

As co-lead of the Samotsvety Forecasting team (approximately 15 forecasters), Lifland helped guide the team to first-place finishes in the INFER competition in 2020, 2021, and 2022.[23] In 2020, Samotsvety placed 1st with a relative score of -0.912 versus -0.062 for 2nd place; in 2021, it placed 1st with -3.259 versus -0.889 (more negative relative scores indicate larger outperformance of the crowd). Samotsvety holds positions 1 through 4 in INFER's all-time ranking, with some members achieving Superforecaster status.[24]

The team has produced public forecasts on critical topics including AI existential risk and nuclear risk.[25]

Sage and AI Digest

Lifland co-founded Sage, an organization focused on building interactive AI explainers and forecasting tools.[26] One of Sage's key projects is AI Digest, which received $550,000 from Coefficient Giving for its work, with an additional $550,000 for forecasting projects.[27] The organization aims to make AI developments more accessible to broader audiences through interactive tools and clear explanations.

Role in the AI Safety Community

Lifland is active in the AI safety and alignment communities, particularly on LessWrong and the Effective Altruism Forum. He serves as a mentor in the MATS Program, focusing on the Strategy & Forecasting and Policy & Governance streams.[28] He has also been featured in "Making God," a documentary exploring AGI risks.[29]

Lifland has taken the Giving What We Can Pledge, committing to donate 10% of his lifetime income to effective charities.[30]

Criticisms and Controversies

Lifland's work, particularly the AI 2027 timelines model, has faced methodological criticism from community members. In a detailed critique posted to LessWrong, the EA Forum, and Substack, the forecaster "titotal" described the model's fundamental structure as "highly questionable," with little empirical validation and poor justification for parameters such as superexponential time-horizon growth curves.[31] Titotal argued that models need strong conceptual and empirical justifications before influencing major decisions, characterizing AI 2027 as resembling a "shoddy toy model stapled to a sci-fi short story" disguised as rigorous research.[32]
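To make the disputed parameter concrete: an exponential curve doubles an AI's task-time horizon on a fixed schedule, while a superexponential variant shrinks each successive doubling time, pulling milestone dates substantially closer. The toy comparison below uses entirely hypothetical numbers, not the model's actual parameters:

```python
def years_to_target(start_h, target_h, doubling_years, shrink=1.0):
    """Years until the task horizon reaches target_h, doubling every
    `doubling_years`; each doubling time is then multiplied by `shrink`
    (shrink < 1 gives superexponential growth)."""
    h, t, d = start_h, 0.0, doubling_years
    while h < target_h:
        h *= 2
        t += d
        d *= shrink
    return t

# Growing the horizon from 1 hour to 170,000 hours of human work:
print(years_to_target(1, 170_000, doubling_years=0.5))              # exponential: 9.0 years
print(years_to_target(1, 170_000, doubling_years=0.5, shrink=0.9))  # superexponential: ~4.2 years
```

The roughly factor-of-two gap between the two outputs, under otherwise identical assumptions, is why critics focus on how well that single curve-shape choice is justified.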

Critics have also raised concerns about philosophical overconfidence, warning that popularizing flawed models could lead people to make significant life decisions based on shaky forecasts.[33] Others counter that inaction on short timelines could be costlier if the forecasts prove accurate.[34]

Lifland responded to these criticisms by acknowledging errors and reviewing titotal's critique for factual accuracy. He agreed to changes in the model write-up and paid $500 bounties to both titotal and another critic, Peter Johnson, for identifying issues.[35][36] The team released a detailed response explaining their reasoning more thoroughly, including their justification for the model's assumptions.[37]

Other criticisms include:

  • Lack of skeptic engagement: some community members felt AI 2027 did not sufficiently address skeptical frameworks or justify its models against competing views[38]
  • Unverifiable predictions: concerns that some predictions are difficult to validate empirically[39]

Lifland has been forthright about forecast misses and has regularly updated his timelines as new evidence emerges.[40] No major personal controversies or ethical issues have been documented beyond these methodological debates.

Sources

  1. Samotsvety Track Record

  2. AI 2027 About Page

  3. Lawfare Media - Daniel Kokotajlo and Eli Lifland on AI 2027

  4. Eli Lifland Personal Website

  5. Eli Lifland Google Scholar Profile

  6. AI Futures Project About Page

  7. AI Futures Project About Page

  8. AI 2027 About Page

  9. AI 2027 About Page

  10. AI 2027 Website

  11. ControlAI Newsletter - Future of AI Special Edition

  12. AI 2027 Website

  13. AI 2027 Website

  14. Lawfare Media - Daniel Kokotajlo and Eli Lifland on AI 2027

  15. ControlAI Newsletter - Future of AI Special Edition

  16. CEPR Webinar - AI 2027 Scenario Forecast

  17. AI Futures Blog - Clarifying Timelines Forecasts

  18. AI Futures Blog - Clarifying Timelines Forecasts

  19. AI Futures Blog - Clarifying Timelines Forecasts

  20. Marketing AI Institute - Moving Back AGI Timeline

  21. Samotsvety Track Record

  22. Samotsvety Track Record

  23. Samotsvety Track Record

  24. Samotsvety Track Record

  25. EA Forum - Samotsvety's AI Risk Forecasts

  26. Eli Lifland Personal Website

  27. Manifund - AI Digest Project

  28. MATS Program - Eli Lifland Mentor Profile

  29. EA Forum - Making God Documentary

  30. Eli Lifland Personal Website

  31. LessWrong - Deep Critique of AI 2027 Timeline Models

  32. LessWrong - Deep Critique of AI 2027 Timeline Models

  33. EA Forum - Practical Value of Flawed Models

  34. EA Forum - Practical Value of Flawed Models

  35. AI Futures Notes Substack - Response to Titotal Critique

  36. EA Forum - Practical Value of Flawed Models

  37. AI Futures Notes Substack - Response to Titotal Critique

  38. ControlAI Newsletter - Future of AI Special Edition

  39. AI 2027 Website

  40. AI Futures Blog - Clarifying Timelines Forecasts

Related Pages

Concepts: OpenAI · AGI Timeline · LessWrong · AI Governance · Metaculus · AGI Development

People: Philip Tetlock (Forecasting Pioneer) · Connor Leahy

Models: AI Capability Threshold Model · AI Risk Activation Timeline Model · AI-Bioweapons Timeline Model

Key Debates: AI Risk Critical Uncertainties Model

Approaches: Prediction Markets (AI Forecasting) · AI-Augmented Forecasting