Biological Anchors: A Trick That Might Or Might Not Work
astralcodexten.com/p/biological-anchors-a-trick-that-might
This Astral Codex Ten post by Scott Alexander provides an accessible critical analysis of Ajeya Cotra's influential Biological Anchors report on transformative AI timelines, useful for understanding debates around AI forecasting methodology.
Metadata
Importance: 62/100 · blog post · commentary
Summary
Scott Alexander analyzes Ajeya Cotra's 'Biological Anchors' framework for forecasting transformative AI timelines, examining its methodology of anchoring compute requirements for human-level AI to estimates of the brain's computational capacity. The post evaluates the strengths and weaknesses of this approach, including uncertainties in the underlying biological estimates and the assumptions needed to translate them into AI training compute requirements.
Key Points
- The Biological Anchors report attempts to forecast when transformative AI might arrive by estimating how much compute is needed to match the human brain's processing capacity (a rough numerical sketch of this anchoring arithmetic follows this list).
- Scott Alexander critically examines whether anchoring AI timelines to biological neural computation is a valid or misleading methodology.
- Key uncertainties include disagreements about the brain's effective FLOP/s and whether current deep learning paradigms would actually benefit from brain-scale compute.
- The post explores whether the framework produces calibrated probability distributions or whether it gives false precision to highly uncertain estimates.
- Despite these limitations, the biological anchors approach is one of the more systematic attempts to ground AI timeline forecasting in empirical reference points.
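The anchoring arithmetic behind these points can be made concrete with a small, hedged sketch: required training compute shrinks as algorithms improve, affordable training compute grows as hardware gets cheaper, and the forecast year is where the two cross. Every constant below (brain FLOP/s, training-to-inference ratio, budget, both halving times) is an invented placeholder for illustration, not a number from the report.

```python
# Hedged sketch of the biological-anchors crossover arithmetic.
# Every constant below is an illustrative placeholder, NOT a number from the report.

BRAIN_FLOPS = 1e15            # assumed effective inference compute of the brain, FLOP/s
TRAIN_TO_INFER = 1e12         # assumed training FLOP needed per unit of inference FLOP/s
ALGO_HALVING_YEARS = 2.5      # assumed time for algorithmic progress to halve compute needs
PRICE_HALVING_YEARS = 2.5     # assumed time for the price of compute ($/FLOP) to halve
BUDGET_USD = 1e9              # assumed willingness to pay for one training run
FLOP_PER_DOLLAR_2020 = 1e17   # assumed FLOP purchasable per dollar in the base year

def flop_needed(year: int, base_year: int = 2020) -> float:
    """Training compute required, shrinking as algorithms improve."""
    return BRAIN_FLOPS * TRAIN_TO_INFER / 2 ** ((year - base_year) / ALGO_HALVING_YEARS)

def flop_affordable(year: int, base_year: int = 2020) -> float:
    """Training compute the budget buys, growing as hardware gets cheaper."""
    return BUDGET_USD * FLOP_PER_DOLLAR_2020 * 2 ** ((year - base_year) / PRICE_HALVING_YEARS)

# Walk forward until the affordable compute crosses the required compute.
year = 2020
while flop_affordable(year) < flop_needed(year):
    year += 1
print(f"Illustrative crossover year: {year}")
```

With these made-up inputs the crossover lands in the mid-2020s; the point is only to show how sensitive the output year is to each assumed constant, which is exactly the fragility the post probes.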
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI Timelines | Concept | 95.0 |
Cached Content Preview
HTTP 200 · Fetched Feb 26, 2026 · 441 KB
Biological Anchors: A Trick That Might Or Might Not Work
Feb 23, 2022

Introduction

I've been trying to review and summarize Eliezer Yudkowsky's recent dialogues on AI safety. Previously in sequence: Yudkowsky Contra Ngo On Agents. Now we're up to Yudkowsky contra Cotra on biological anchors, but before we get there we need to figure out what Cotra's talking about and what's going on.

The Open Philanthropy Project ("Open Phil") is a big effective altruist foundation interested in funding AI safety. It's got $20 billion, probably the majority of money in the field, so its decisions matter a lot and it's very invested in getting things right. In 2020, it asked senior researcher Ajeya Cotra to produce a report on when human-level AI would arrive. It says the resulting document is "informal" - but it's 169 pages long and likely to affect millions of dollars in funding, which some might describe as making it kind of formal.

The report finds a 10% chance of "transformative AI" by 2031, a 50% chance by 2052, and an almost 80% chance by 2100. Eliezer rejects their methodology and expects AI earlier (he doesn't offer many numbers, but here he gives Bryan Caplan 50-50 odds on 2030, albeit not totally seriously). He made the case in his own very long essay, Biology-Inspired AGI Timelines: The Trick That Never Works, sparking a bunch of arguments and counterarguments and even more long essays. There's a small cottage industry of summarizing the report already, eg OpenPhil CEO Holden Karnofsky's article and Alignment Newsletter editor Rohin Shah's comment. I've drawn from both for my much-inferior attempt.

Part I: The Cotra Report

Ajeya Cotra is a senior research analyst at OpenPhil. She's assisted by her fiancé Paul Christiano (compsci PhD, OpenAI veteran, runs an AI alignment nonprofit) and to a lesser degree by other leading lights. Although not everyone involved has formal ML training, if you care a lot about whether efforts are "establishment" or "contrarian", this one is probably more establishment.

The report asks when will we first get "transformative AI" (ie AI which produces a transition as impressive as the Industrial Revolution; probably this will require it to be about as smart as humans). Its methodology is:

1. Figure out how much inferential computation the human brain does.
2. Try to figure out how much training computation it would take, right now, to get a neural net that does the same amount of inferential computation. Get some mind-bogglingly large number.
3. Adjust for "algorithmic progress", ie maybe in the future neural nets will be better at using computational resources efficiently. Get some number which, realistically, is still mind-bogglingly large.
4. Probably if you wanted that mind-bogglingly large amount of computation, it would take some mind-bogglingly large amount of money. But computation is getting cheaper eve
... (truncated, 441 KB total)
Resource ID: c2f7836267607b52 | Stable ID: MzIyYzJlNW
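As a closing illustration of how the four-step methodology quoted in the preview turns into percentile forecasts like "10% by 2031, 50% by 2052", here is a hedged Monte Carlo sketch: wide uncertainty over the brain's effective FLOP/s and over the training-to-inference ratio is propagated into a distribution over crossover years. All distribution parameters, the budget, and the yearly gain are invented for illustration and are not taken from the report.

```python
# Hedged Monte Carlo sketch: propagate uncertainty in the biological anchors
# into a distribution over arrival years. All parameters are illustrative
# assumptions, not figures from the Cotra report.
import math
import random

BASE_YEAR = 2020
BUDGET_USD = 1e9              # assumed spend on one training run
FLOP_PER_DOLLAR = 1e17        # assumed FLOP purchasable per dollar in BASE_YEAR
YEARLY_GAIN = 2 ** (2 / 2.5)  # assumed combined yearly gain from cheaper hardware
                              # plus algorithmic progress (each halving costs every 2.5 years)

def sample_arrival_year(rng: random.Random) -> int:
    # Lognormal (base-10) uncertainty over the brain's effective FLOP/s.
    brain_flops = 10 ** rng.normalvariate(15, 2)
    # Lognormal uncertainty over training FLOP needed per unit of inference FLOP/s.
    train_ratio = 10 ** rng.normalvariate(12, 2)
    needed = brain_flops * train_ratio         # training FLOP required in BASE_YEAR terms
    affordable = BUDGET_USD * FLOP_PER_DOLLAR  # training FLOP the budget buys in BASE_YEAR
    if affordable >= needed:
        return BASE_YEAR
    # Years until affordable compute (growing) catches required compute (shrinking).
    return BASE_YEAR + math.ceil(math.log(needed / affordable, YEARLY_GAIN))

rng = random.Random(0)
samples = sorted(sample_arrival_year(rng) for _ in range(10_000))
for pct in (10, 50, 80):
    print(f"{pct}th-percentile arrival year: {samples[len(samples) * pct // 100]}")
```

The percentiles printed here track the invented inputs, not the report's conclusions; the sketch only shows the mechanism by which huge input uncertainty becomes a probability distribution over dates, which is the step the post questions as possibly giving false precision.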