Longterm Wiki

Biology-Inspired AGI Timelines: The Trick That Never Works


Author: Eliezer Yudkowsky

Credibility Rating

Good (3/5)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: LessWrong

A classic Yudkowsky post frequently cited in AI forecasting debates; relevant when evaluating brain-emulation or neuromorphic arguments about AGI timelines and the reliability of compute-based forecasting methods.

Forum Post Details

Karma
160
Comments
151
Forum
lesswrong
Status
Curated
Forum Tags
AI Timelines · Dialogue (format) · Forecasting & Prediction · History · Technological Forecasting · AI
Part of sequence: 2021 MIRI Conversations

Metadata

Importance: 62/100 · blog post · commentary

Summary

Eliezer Yudkowsky critiques the common practice of estimating AGI timelines by comparing brain computational requirements to current hardware, arguing this method has a long history of failure. He contends that biological analogies obscure critical unknowns about algorithmic complexity and architectural differences, making such forecasts unreliable. While AGI's eventual arrival may be anticipated, its precise timing cannot be derived from neuroscience-compute comparisons.

Key Points

  • Using biological brain metrics (e.g., neuron counts, synaptic operations) to estimate when AI will match human intelligence has repeatedly failed historically.
  • Biology-compute analogies ignore deep uncertainties about what algorithms are needed for AGI, not just raw computational substrate.
  • Architectural differences between biological brains and AI systems make direct comparisons epistemically suspect.
  • Yudkowsky distinguishes between predicting that AGI will eventually arrive (plausible) and predicting when it will arrive (nearly impossible via this method).
  • The post serves as a standing critique of a recurring forecasting error in AI timeline discussions, applicable across multiple decades of predictions.

Cited by 1 page

Page          Type      Quality
AI Timelines  Concept   95.0

Cached Content Preview

HTTP 200 · Fetched Feb 26, 2026 · 2381 KB
Biology-Inspired AGI Timelines: The Trick That Never Works — by Eliezer Yudkowsky, 1st Dec 2021, AI Alignment Forum, 78 min read.

- 1988 -

Hans Moravec: Behold my book Mind Children. Within, I project that, in 2010 or thereabouts, we shall achieve strong AI. I am not calling it "Artificial General Intelligence" because this term will not be coined for another 15 years or so.

Eliezer (who is not actually on the record as saying this, because the real Eliezer is, in this scenario, 8 years old; this version of Eliezer has all the meta-heuristics of Eliezer from 2021, but none of that Eliezer's anachronistic knowledge): Really? That sounds like a very difficult prediction to make correctly, since it is about the future, which is famously hard to predict.

Imaginary Moravec: Sounds like a fully general counterargument to me.

Eliezer: Well, it is, indeed, a fully general counterargument against futurism. Successfully predicting the unimaginably far future - that is, more than 2 or 3 years out, or sometimes less - is something that human beings seem to be quite bad at, by and large.

Moravec: I predict that, 4 years from this day, in 1992, the Sun will rise in the east.

Eliezer: Okay, let me qualify that. Humans seem to be quite bad at predicting the future whenever we need to predict anything at all new and unfamiliar, rather than the Sun continuing to rise every morning until it finally gets eaten. I'm not saying it's impossible to ever validly predict something novel! Why, even if that was impossible, how could I know it for sure? By extrapolating from my own personal inability to make predictions like that? Maybe I'm just bad at it myself.
But any time somebody claims that some particular novel aspect of the far future is predictable, they justly have a significant burden of prior skepticism to overcome. More broadly, we should not expect a good futurist to give us a generally good picture of the future. We should expect a great futurist to single out a few rare narrow aspects of the future which are, somehow, exceptions to the usual rule about the future not being very predictable. I do agree with you, for example, that we shall at some point see Artificial General Intelligence. This seems like a rare predictable fact about the future, even though it is about a novel thing which has not happened before: we keep trying to crack this problem, we make progress albeit slowly, the problem must be solvable in principle because human brains solve it, eventually it will be solved; this is not a logical necessity, but it sure seems like the way to bet. "AGI eventually" is predictable in a way that it is not pr

... (truncated, 2381 KB total)
Resource ID: 8d2bfce96c23cd28 | Stable ID: MzE5MWJhNj