Eli Lifland
eli-lifland (E435)
Path: /knowledge-base/people/eli-lifland/
Page Metadata
{
"id": "eli-lifland",
"numericId": null,
"path": "/knowledge-base/people/eli-lifland/",
"filePath": "knowledge-base/people/eli-lifland.mdx",
"title": "Eli Lifland",
"quality": 58,
"importance": 72,
"contentFormat": "article",
"tractability": null,
"neglectedness": null,
"uncertainty": null,
"causalLevel": null,
"lastUpdated": "2026-02-01",
"llmSummary": "Biographical profile of Eli Lifland, a top-ranked forecaster and AI safety researcher who co-authored the AI 2027 scenario forecast and co-founded the AI Futures Project. The page documents his forecasting track record, the AI Futures timelines model, and his contributions to AI safety discourse.",
"structuredSummary": null,
"description": "Forecaster and AI safety researcher specializing in AGI timelines forecasting, scenario planning, and AI governance. Ranks #1 on the RAND Forecasting Initiative all-time leaderboard and co-authored the AI 2027 scenario forecast.",
"ratings": {
"novelty": 4,
"rigor": 6,
"actionability": 5,
"completeness": 8
},
"category": "people",
"subcategory": null,
"clusters": [
"ai-safety",
"epistemics"
],
"metrics": {
"wordCount": 1427,
"tableCount": 2,
"diagramCount": 0,
"internalLinks": 14,
"externalLinks": 41,
"footnoteCount": 40,
"bulletRatio": 0.15,
"sectionCount": 10,
"hasOverview": true,
"structuralScore": 12
},
"suggestedQuality": 80,
"updateFrequency": 45,
"evergreen": true,
"wordCount": 1427,
"unconvertedLinks": [
{
"text": "Samotsvety Track Record",
"url": "https://samotsvety.org/track-record/",
"resourceId": "c7b435dfad2f7ca2",
"resourceTitle": "Samotsvety Track Record"
},
{
"text": "Samotsvety Track Record",
"url": "https://samotsvety.org/track-record/",
"resourceId": "c7b435dfad2f7ca2",
"resourceTitle": "Samotsvety Track Record"
},
{
"text": "Samotsvety Track Record",
"url": "https://samotsvety.org/track-record/",
"resourceId": "c7b435dfad2f7ca2",
"resourceTitle": "Samotsvety Track Record"
},
{
"text": "Samotsvety Track Record",
"url": "https://samotsvety.org/track-record/",
"resourceId": "c7b435dfad2f7ca2",
"resourceTitle": "Samotsvety Track Record"
},
{
"text": "Samotsvety Track Record",
"url": "https://samotsvety.org/track-record/",
"resourceId": "c7b435dfad2f7ca2",
"resourceTitle": "Samotsvety Track Record"
}
],
"unconvertedLinkCount": 5,
"convertedLinkCount": 0,
"backlinkCount": 0,
"redundancy": {
"maxSimilarity": 17,
"similarPages": [
{
"id": "ai-futures-project",
"title": "AI Futures Project",
"path": "/knowledge-base/organizations/ai-futures-project/",
"similarity": 17
},
{
"id": "samotsvety",
"title": "Samotsvety",
"path": "/knowledge-base/organizations/samotsvety/",
"similarity": 13
},
{
"id": "futuresearch",
"title": "FutureSearch",
"path": "/knowledge-base/organizations/futuresearch/",
"similarity": 12
},
{
"id": "metaculus",
"title": "Metaculus",
"path": "/knowledge-base/organizations/metaculus/",
"similarity": 12
},
{
"id": "nuno-sempere",
"title": "Nuño Sempere",
"path": "/knowledge-base/people/nuno-sempere/",
"similarity": 12
}
]
}
}
Entity Data
{
"id": "eli-lifland",
"type": "person",
"title": "Eli Lifland",
"description": "AI researcher, forecaster, and entrepreneur specializing in AGI timelines forecasting, scenario planning, and AI governance. Ranks #1 on the RAND Forecasting Initiative all-time leaderboard and co-authored the influential AI 2027 scenario forecast.",
"tags": [
"forecasting",
"agi-timelines",
"scenario-planning",
"samotsvety",
"ai-governance"
],
"relatedEntries": [
{
"id": "ai-futures-project",
"type": "organization"
},
{
"id": "samotsvety",
"type": "organization"
},
{
"id": "metaculus",
"type": "organization"
},
{
"id": "open-philanthropy",
"type": "organization"
},
{
"id": "lesswrong",
"type": "organization"
}
],
"sources": [],
"lastUpdated": "2026-02",
"customFields": []
}
Canonical Facts (0)
No facts for this entity
External Links
No external links
Backlinks (0)
No backlinks
Frontmatter
{
"title": "Eli Lifland",
"description": "Forecaster and AI safety researcher specializing in AGI timelines forecasting, scenario planning, and AI governance. Ranks #1 on the RAND Forecasting Initiative all-time leaderboard and co-authored the AI 2027 scenario forecast.",
"importance": 72,
"lastEdited": "2026-02-01",
"update_frequency": 45,
"sidebar": {
"order": 50
},
"ratings": {
"novelty": 4,
"rigor": 6,
"actionability": 5,
"completeness": 8
},
"quality": 58,
"llmSummary": "Biographical profile of Eli Lifland, a top-ranked forecaster and AI safety researcher who co-authored the AI 2027 scenario forecast and co-founded the AI Futures Project. The page documents his forecasting track record, the AI Futures timelines model, and his contributions to AI safety discourse.",
"clusters": [
"ai-safety",
"epistemics"
],
"entityType": "person"
}
Raw MDX Source
---
title: Eli Lifland
description: "Forecaster and AI safety researcher specializing in AGI timelines
forecasting, scenario planning, and AI governance. Ranks #1 on the RAND
Forecasting Initiative all-time leaderboard and co-authored the AI 2027
scenario forecast."
importance: 72
lastEdited: "2026-02-01"
update_frequency: 45
sidebar:
order: 50
ratings:
novelty: 4
rigor: 6
actionability: 5
completeness: 8
quality: 58
llmSummary: Biographical profile of Eli Lifland, a top-ranked forecaster and AI
safety researcher who co-authored the AI 2027 scenario forecast and co-founded
the AI Futures Project. The page documents his forecasting track record,
the AI Futures timelines model, and his contributions to AI safety discourse.
clusters: ["ai-safety","epistemics"]
entityType: person
---
import {EntityLink, KeyPeople, KeyQuestions, Section} from '@components/wiki';
## Quick Assessment
| Attribute | Assessment |
|-----------|------------|
| **Primary Focus** | AGI forecasting, scenario planning, <EntityLink id="E608">AI governance</EntityLink> |
| **Key Achievements** | #1 RAND Forecasting Initiative all-time leaderboard; co-authored AI 2027 scenario forecast; co-lead of <EntityLink id="E560">Samotsvety</EntityLink> forecasting team |
| **Current Roles** | Co-founder and researcher at <EntityLink id="E511">AI Futures Project</EntityLink>; co-founder/advisor at Sage; guest fund manager at Long Term Future Fund |
| **Educational Background** | Computer science and economics degrees from University of Virginia |
| **Notable Contributions** | AI 2027 scenario forecast; AI Futures timelines model; top-ranked forecasting track record |
## Key Links
| Source | Link |
|--------|------|
| Official Website | [elilifland.com](https://www.elilifland.com) |
## Overview
**Eli Lifland** is a forecaster and AI safety researcher who ranks #1 on the RAND Forecasting Initiative all-time leaderboard. He co-leads the <EntityLink id="E560">Samotsvety</EntityLink> forecasting team, which placed first in the CSET-Foretell/INFER competition in 2020, 2021, and 2022.[^1] His work focuses on <EntityLink id="E399">AGI timeline</EntityLink> forecasting, scenario planning, and AI safety.
Lifland co-founded the <EntityLink id="E511">AI Futures Project</EntityLink> alongside Daniel Kokotajlo and Thomas Larsen, and co-authored **AI 2027**, a detailed scenario forecast exploring potential <EntityLink id="E604">AGI development</EntityLink> trajectories.[^2][^3] The project, with contributions from Scott Alexander and Romeo Dean, provides a concrete scenario for how superhuman AI capabilities might emerge, including geopolitical tensions, technical breakthroughs, and alignment challenges.
Lifland also co-founded and advises Sage, an organization building interactive AI explainers and forecasting tools, and serves as a guest fund manager at the Long Term Future Fund.[^4] He previously worked on <EntityLink id="E526">Elicit</EntityLink> at Ought and co-created TextAttack, a Python framework for adversarial attacks in natural language processing.[^5]
## AI Futures Project and AI 2027
Lifland is a co-founder and researcher at the **AI Futures Project**, a 501(c)(3) organization focused on AGI forecasting, scenario planning, and policy engagement.[^6] The organization was co-founded with Daniel Kokotajlo (Executive Director, former <EntityLink id="E218">OpenAI</EntityLink> researcher) and Thomas Larsen (founder of the Center for AI Policy).[^7]
The project's flagship output is **AI 2027**, a detailed scenario forecast released in April 2025 exploring how superintelligence might emerge.[^8] The scenario was co-authored with Scott Alexander (who primarily assisted with rewriting) and Romeo Dean (who contributed supplements on compute and security considerations).[^9]
The AI 2027 forecast presents a concrete narrative of AI development including:
- Increasingly capable AI agents automating significant portions of AI research and development[^10]
- Geopolitical tensions, particularly a US-China AI race, influencing safety decisions and deployment timelines[^11]
- Alignment challenges, including exploration of safer model series using chain-of-thought reasoning to address failures[^12]
- Economic impacts, including widespread job displacement[^13]
The project received significant attention and has been discussed in venues including Lawfare Media, <EntityLink id="E426">ControlAI</EntityLink>, and a CEPR webinar.[^14][^15][^16]
## AI Futures Timelines Model
The AI Futures Project maintains a quantitative timelines model that generates probability distributions for key AGI milestones such as Automated Coder (AC) and superintelligence (ASI). The model incorporates benchmark tracking, compute availability, algorithmic progress, and other inputs to produce forecasts that team members then adjust based on their individual judgment.[^17]
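The full methodology is laid out in the project's own write-up; as a rough illustration of the general shape of such a model (uncertain inputs combined by Monte Carlo sampling into a probability distribution over milestone dates), here is a minimal sketch. The distributions, parameters, and milestone framing below are hypothetical and are not the AI Futures Project's actual numbers.

```python
import random

def sample_milestone_year(n_samples=100_000, current_year=2026):
    """Toy Monte Carlo timelines model (illustrative only).

    Each sample draws two uncertain inputs -- how much "effective progress"
    remains before an automated-coder-style milestone, and how fast progress
    accumulates per year -- and converts them into an arrival year.
    """
    years = []
    for _ in range(n_samples):
        remaining = random.lognormvariate(1.5, 0.8)   # work remaining, arbitrary units
        rate = random.lognormvariate(0.0, 0.5)        # units of work completed per year
        years.append(current_year + remaining / rate)
    years.sort()
    return {
        "p10": round(years[int(0.10 * n_samples)], 1),
        "median": round(years[int(0.50 * n_samples)], 1),
        "p90": round(years[int(0.90 * n_samples)], 1),
    }

print(sample_milestone_year())  # prints p10 / median / p90 arrival years for the toy milestone
```

The actual model combines far more inputs than this two-parameter toy (benchmark trends, compute availability, algorithmic progress), but the output has the same form: a distribution over milestone years that team members then adjust using individual judgment.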
Lifland's personal AGI timeline estimates have shifted as new evidence has emerged. His median TED-AI (a general intelligence milestone) forecast has followed this trajectory:[^18]
- 2021: ~2060
- July 2022: ~2050
- January 2024: ~2038
- Mid-2024: ~2035
- December 2024: ~2032
- April 2025: ~2031
- July 2025: ~2033
- January 2026: ~2035
The AI Futures Project has emphasized that the AI 2027 scenario was never intended as a confident prediction that AGI would arrive in 2027, and that all team members maintain high uncertainty about when AGI and ASI will be built.[^19] The December 2025 model update pushed its projected timeline to full coding automation 3-5 years later than the April 2025 AI 2027 forecast, a shift attributed primarily to more conservative modeling of pre-automation AI R&D speedups and to recognition of potential data bottlenecks.[^20]
## Forecasting Track Record
Lifland ranks #1 on the RAND Forecasting Initiative (CSET-Foretell/INFER) all-time leaderboard.[^21] On Good Judgment Open (GJOpen), his Brier score of 0.23 beats the median of 0.301 (lower is better; a ratio of 0.76), and as of September 2022 he had placed 2nd in the <EntityLink id="E199">Metaculus</EntityLink> Economist 2021 tournament and 1st in the Salk Tournament.[^22]
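For readers unfamiliar with the metric: the Brier score is the mean squared difference between forecast probabilities and binary outcomes, so lower is better, and an uninformative 50% forecast on every question scores 0.25. A minimal illustration with made-up forecasts (not Lifland's actual questions or probabilities):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes (lower is better)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical example: four binary questions, forecast probability vs. what happened.
print(brier_score([0.9, 0.2, 0.7, 0.4], [1, 0, 1, 0]))  # 0.075
print(brier_score([0.5, 0.5, 0.5, 0.5], [1, 0, 1, 0]))  # 0.25 (always guessing 50%)
```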
As co-lead of the **Samotsvety Forecasting team** (approximately 15 forecasters), Lifland helped guide the team to first-place finishes in the INFER competition in 2020, 2021, and 2022.[^23] In 2020, Samotsvety placed 1st with a relative score of -0.912 compared to -0.062 for 2nd place. In 2021, they achieved 1st with a relative score of -3.259 compared to -0.889 for 2nd place. Samotsvety holds positions 1, 2, 3, and 4 in INFER's all-time ranking, with some members achieving Superforecaster status.[^24]
The team has produced public forecasts on critical topics including AI existential risk and nuclear risk.[^25]
## Sage and AI Digest
Lifland co-founded **Sage**, an organization focused on building interactive AI explainers and forecasting tools.[^26] One of Sage's key projects is **AI Digest**, which received \$550,000 from <EntityLink id="E552">Coefficient Giving</EntityLink> for its work, with an additional \$550,000 for forecasting projects.[^27] The organization aims to make AI developments more accessible to broader audiences through interactive tools and clear explanations.
## Role in the AI Safety Community
Lifland is active in the AI safety and alignment communities, particularly through <EntityLink id="E538">LessWrong</EntityLink> and the Effective Altruism Forum. He serves as a mentor in the MATS Program within its Strategy & Forecasting and Policy & Governance streams.[^28] He has also been featured in the documentary "Making God," which explores AGI risks.[^29]
Lifland has taken the Giving What We Can Pledge, committing to donate 10% of his lifetime income to effective charities.[^30]
## Criticisms and Controversies
Lifland's work, particularly the AI 2027 timelines model, has faced methodological criticism from community members. In a detailed critique posted to <EntityLink id="E538">LessWrong</EntityLink>, the EA Forum, and Substack, forecaster "titotal" described the model's fundamental structure as "highly questionable," with little empirical validation and poor justification for parameters like superexponential time horizon growth curves.[^31] Titotal argued that models need strong conceptual and empirical justifications before influencing major decisions, characterizing AI 2027 as resembling a "shoddy toy model stapled to a sci-fi short story" disguised as rigorous research.[^32]
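To make the object of this critique concrete: a "superexponential" time-horizon curve is one in which each successive doubling of the horizon takes less calendar time than the one before, so growth accelerates rather than compounding at a fixed rate. A minimal sketch of the distinction, using purely illustrative parameters rather than the model's actual ones:

```python
def years_for_doublings(n_doublings, first_doubling_years=0.5, shrink=1.0):
    """Calendar years needed for n doublings of an AI task time-horizon.

    shrink = 1.0 -> exponential growth (every doubling takes equally long).
    shrink < 1.0 -> superexponential growth (each doubling is faster than the last),
                    so total time stays bounded even as doublings pile up.
    """
    total, current = 0.0, first_doubling_years
    for _ in range(n_doublings):
        total += current
        current *= shrink
    return total

# Illustrative only: 10 doublings starting from a 6-month doubling time.
print(years_for_doublings(10, shrink=1.0))   # 5.0 years (exponential)
print(years_for_doublings(10, shrink=0.9))   # ~3.26 years (superexponential)
```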
Critics have also raised concerns about philosophical overconfidence, warning that popularizing flawed models could lead people to make significant life decisions based on shaky forecasts.[^33] Others counter that inaction on short timelines could be costlier if the forecasts prove accurate.[^34]
Lifland responded to these criticisms by acknowledging errors and reviewing titotal's critique for factual accuracy. He agreed to changes in the model write-up and paid \$500 bounties to both titotal and another critic, Peter Johnson, for identifying issues.[^35][^36] The team released a detailed response explaining their reasoning more thoroughly, including their justification for the model's assumptions.[^37]
Other criticisms include:
- **Lack of skeptic engagement**: Some community members felt AI 2027 did not sufficiently address skeptical frameworks or justify its models against competing views[^38]
- **Unverifiable predictions**: Concerns that some predictions are difficult to validate empirically[^39]
Lifland has been forthright about forecast misses and has regularly updated his timelines as new evidence emerges.[^40] No major personal controversies or ethical issues have been documented beyond these methodological debates.
## Sources
[^1]: [Samotsvety Track Record](https://samotsvety.org/track-record/)
[^2]: [AI 2027 About Page](https://ai-2027.com/about)
[^3]: [Lawfare Media - Daniel Kokotajlo and Eli Lifland on AI 2027](https://www.lawfaremedia.org/article/lawfare-daily--daniel-kokotajlo-and-eli-lifland-on-their-ai-2027-report)
[^4]: [Eli Lifland Personal Website](https://www.elilifland.com)
[^5]: [Eli Lifland Google Scholar Profile](https://scholar.google.com/citations?user=Q33DXbEAAAAJ&hl=en)
[^6]: [AI Futures Project About Page](https://ai-futures.org/about/)
[^7]: [AI Futures Project About Page](https://ai-futures.org/about/)
[^8]: [AI 2027 About Page](https://ai-2027.com/about)
[^9]: [AI 2027 About Page](https://ai-2027.com/about)
[^10]: [AI 2027 Website](https://ai-2027.com)
[^11]: [ControlAI Newsletter - Future of AI Special Edition](https://controlai.news/p/special-edition-the-future-of-ai)
[^12]: [AI 2027 Website](https://ai-2027.com)
[^13]: [AI 2027 Website](https://ai-2027.com)
[^14]: [Lawfare Media - Daniel Kokotajlo and Eli Lifland on AI 2027](https://www.lawfaremedia.org/article/lawfare-daily--daniel-kokotajlo-and-eli-lifland-on-their-ai-2027-report)
[^15]: [ControlAI Newsletter - Future of AI Special Edition](https://controlai.news/p/special-edition-the-future-of-ai)
[^16]: [CEPR Webinar - AI 2027 Scenario Forecast](https://cepr.org/multimedia/cepr-webinar-series-economics-artificial-intelligence-ai-2027-scenario-forecast)
[^17]: [AI Futures Blog - Clarifying Timelines Forecasts](https://blog.ai-futures.org/p/clarifying-how-our-ai-timelines-forecasts)
[^18]: [AI Futures Blog - Clarifying Timelines Forecasts](https://blog.ai-futures.org/p/clarifying-how-our-ai-timelines-forecasts)
[^19]: [AI Futures Blog - Clarifying Timelines Forecasts](https://blog.ai-futures.org/p/clarifying-how-our-ai-timelines-forecasts)
[^20]: [Marketing AI Institute - Moving Back AGI Timeline](https://www.marketingaiinstitute.com/blog/moving-back-agi-timeline)
[^21]: [Samotsvety Track Record](https://samotsvety.org/track-record/)
[^22]: [Samotsvety Track Record](https://samotsvety.org/track-record/)
[^23]: [Samotsvety Track Record](https://samotsvety.org/track-record/)
[^24]: [Samotsvety Track Record](https://samotsvety.org/track-record/)
[^25]: [EA Forum - Samotsvety's AI Risk Forecasts](https://forum.effectivealtruism.org/posts/EG9xDM8YRz4JN4wMN/samotsvety-s-ai-risk-forecasts)
[^26]: [Eli Lifland Personal Website](https://www.elilifland.com)
[^27]: [Manifund - AI Digest Project](https://manifund.org/projects/ai-digest)
[^28]: [MATS Program - Eli Lifland Mentor Profile](https://www.matsprogram.org/mentor/lifland)
[^29]: [EA Forum - Making God Documentary](https://forum.effectivealtruism.org/posts/gsKQknEikbERo4Hih/creating-making-god-a-feature-documentary-on-risks-from-agi)
[^30]: [Eli Lifland Personal Website](https://www.elilifland.com)
[^31]: [LessWrong - Deep Critique of AI 2027 Timeline Models](https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-critique-of-ai-2027-s-bad-timeline-models)
[^32]: [LessWrong - Deep Critique of AI 2027 Timeline Models](https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-critique-of-ai-2027-s-bad-timeline-models)
[^33]: [EA Forum - Practical Value of Flawed Models](https://forum.effectivealtruism.org/posts/fKx6DkWfzJXoycWhE/the-practical-value-of-flawed-models-a-response-to-titotal-s)
[^34]: [EA Forum - Practical Value of Flawed Models](https://forum.effectivealtruism.org/posts/fKx6DkWfzJXoycWhE/the-practical-value-of-flawed-models-a-response-to-titotal-s)
[^35]: [AI Futures Notes Substack - Response to Titotal Critique](https://aifuturesnotes.substack.com/p/response-to-titotals-critique-of)
[^36]: [EA Forum - Practical Value of Flawed Models](https://forum.effectivealtruism.org/posts/fKx6DkWfzJXoycWhE/the-practical-value-of-flawed-models-a-response-to-titotal-s)
[^37]: [AI Futures Notes Substack - Response to Titotal Critique](https://aifuturesnotes.substack.com/p/response-to-titotals-critique-of)
[^38]: [ControlAI Newsletter - Future of AI Special Edition](https://controlai.news/p/special-edition-the-future-of-ai)
[^39]: [AI 2027 Website](https://ai-2027.com)
[^40]: [AI Futures Blog - Clarifying Timelines Forecasts](https://blog.ai-futures.org/p/clarifying-how-our-ai-timelines-forecasts)