Longterm Wiki

Author

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: LessWrong

This is an early public draft of what became Cotra's influential 'biological anchors' TAI timelines report, widely cited in AI safety and forecasting communities; the final version was published by Open Philanthropy in 2020.

Forum Post Details

Karma
215
Comments
56
Forum
lesswrong
Forum Tags
AI Timelines, AI

Metadata

Importance: 82/100 · blog post · analysis

Summary

Ajeya Cotra presents a quantitative framework for forecasting timelines to transformative AI (TAI), anchoring estimates to the computational requirements of the human brain and projecting when available compute will exceed training requirements. The model incorporates algorithmic progress trends and compute cost trajectories, with a Python implementation integrated with Ought's Elicit platform for community input on key parameters. This draft represents one of the most rigorous public attempts to systematically estimate TAI timelines using biological anchors.

Key Points

  • Defines transformative AI (TAI) as AI with economic/scientific impact comparable to the Industrial Revolution, providing a concrete forecasting target.
  • Anchors computational estimates to the human brain's training requirements, then models when AI training compute will plausibly match or exceed this threshold.
  • Incorporates trends in algorithmic efficiency improvements and declining compute costs to project timeline probability distributions.
  • Includes an open Python model integrated with Elicit for community forecasting on key uncertain parameters, enabling collaborative refinement.
  • Explicitly preliminary and does not represent an official Open Philanthropy position, inviting critique and iteration from the broader research community.
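The compute-crossover logic described in the points above can be sketched as a toy model. This is an illustrative reconstruction only, not Cotra's actual model: all parameter values (the anchor requirement, 2020 budget, growth rate, and algorithmic halving time) are placeholder assumptions, and the real report uses probability distributions over several biological anchors rather than point estimates.

```python
# Toy sketch of the biological-anchors crossover logic.
# All constants are illustrative placeholders, NOT values from the report.
ANCHOR_FLOP = 1e30          # hypothetical training-compute requirement anchor
START_YEAR = 2020
BUDGET_FLOP_2020 = 1e24     # hypothetical affordable training compute in 2020
BUDGET_GROWTH = 2.0         # assumed: affordable compute doubles each year
ALGO_HALVING_YEARS = 3.0    # assumed: algorithmic progress halves requirements every 3 years

def effective_requirement(year: float) -> float:
    """Anchor requirement, reduced by algorithmic progress since START_YEAR."""
    return ANCHOR_FLOP * 0.5 ** ((year - START_YEAR) / ALGO_HALVING_YEARS)

def affordable_compute(year: float) -> float:
    """Compute affordable for one training run, growing exponentially."""
    return BUDGET_FLOP_2020 * BUDGET_GROWTH ** (year - START_YEAR)

def crossover_year() -> int:
    """First year in which affordable compute meets the effective requirement."""
    year = START_YEAR
    while affordable_compute(year) < effective_requirement(year):
        year += 1
    return year
```

With these placeholder numbers the affordable budget catches the shrinking requirement in the mid-2030s; the report's contribution is replacing such point estimates with distributions over each parameter and aggregating the resulting timelines.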

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| AI Timelines | Concept | 95.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 98 KB

[Best of LessWrong 2020](https://www.lesswrong.com/bestoflesswrong?year=2020&category=all)

[AI Timelines](https://www.lesswrong.com/w/ai-timelines)[AI](https://www.lesswrong.com/w/ai) [Frontpage](https://www.lesswrong.com/posts/5conQhfa4rgb4SaWx/site-guide-personal-blogposts-vs-frontpage-posts)


# [Draft report on AI timelines](https://www.lesswrong.com/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines)

by [Ajeya Cotra](https://www.lesswrong.com/users/ajeya-cotra?from=post_header)

18th Sep 2020

[AI Alignment Forum](https://alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines)

1 min read


AI Alignment Forum karma: Ω 75

[Review by Daniel Kokotajlo](https://www.lesswrong.com/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines#o3k4znyxFSnpXqrdL)

Hi all, I've been working on some AI forecasting research and have prepared a draft report on timelines to [transformative AI](https://www.openphilanthropy.org/blog/some-background-our-views-regarding-advanced-artificial-intelligence#Sec1). I would love feedback from this community, so I've made the report viewable in a Google Drive folder [here](https://drive.google.com/drive/u/1/folders/15ArhEPZSTYU8f012bs6ehPS6-xmhtBPP).

With that said, most of my focus so far has been on the high-level structure of the framework, so the particular quantitative estimates are very much in flux and many input parameters aren't pinned down well -- I wrote the bulk of this report before July and have received feedback since then that I haven't fully incorporated yet. I'd prefer if people didn't share it widely in a low-bandwidth way (e.g., just posting key graphics on Facebook or Twitter) since the conclusions don't reflect Open Phil's "institutional view" yet, and there may well be some errors in the report.

The report includes a quantitative model written in Python. [Ought](https://ought.org/) has worked with me to integrate their forecasting platform [Elicit](https://elicit.ought.org/) into the model so that you can see other people's forecasts for various parameters. If you have questions or feedback about the Elicit integration, fee

... (truncated, 98 KB total)
Resource ID: cd9f1d771d9a34c5 | Stable ID: MWJlODI0Yj