Longterm Wiki

A deep critique of AI 2027's bad timeline models

blog

Author

titotal

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: LessWrong

A technical counterpoint to the influential AI 2027 forecasting project; particularly relevant for those evaluating the empirical basis of short-timeline AGI arguments and the reliability of quantitative AI forecasting models.

Forum Post Details

Karma
369
Comments
40
Forum
lesswrong
Forum Tags
AI Timelines · Has Diagram · Simulation · AI

Metadata

Importance: 62/100 · blog post · commentary

Summary

A computational physicist conducts a detailed technical review of the AI 2027 project's forecast code and methodology, arguing that its model predicting a 2027 AI singularity has fundamental flaws, including insufficient empirical validation, problematic parameter estimates, and discrepancies between the written methodology and the actual code. The critique challenges the project's viral credibility despite endorsements from prominent figures in AI safety.

Key Points

  • The AI 2027 project's timeline forecast models are argued to be structurally flawed, with the 'superexponential' growth curve poorly justified empirically.
  • Significant discrepancies exist between the project's written methodology and its actual code implementation, raising concerns about transparency and rigor.
  • Parameter estimates used in the models are critiqued as insufficiently grounded, potentially biasing results toward accelerated timelines.
  • The author identifies at least six alternative narratives consistent with the same data, undermining the project's framing of its forecast as uniquely supported.
  • Despite viral success and high-profile endorsements, the critique argues the modeling foundation does not support high confidence in a 2027 singularity scenario.
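The "superexponential" curve criticized above can be illustrated with a small sketch. This is not the AI 2027 project's actual code, and the parameter values (a 6-month doubling time, a 10% shrink per doubling) are illustrative assumptions; it only shows the structural point: if each successive doubling of the task time horizon takes a fixed fraction less calendar time, the doubling times form a geometric series with a finite sum, so the horizon diverges in finite time (the "infinite time horizons" issue the critique raises).

```python
def horizon_at(months, h0=1.0, doubling_months=6.0, shrink=0.0):
    """Task time horizon (in hours) after `months` of progress.

    With shrink=0 the horizon doubles every `doubling_months` (plain
    exponential). With shrink>0 each successive doubling takes a
    `shrink` fraction less calendar time ("superexponential"); the
    doubling times then sum to doubling_months/shrink, so the horizon
    blows up in finite time. All parameters here are illustrative.
    """
    h, t, d = h0, 0.0, doubling_months
    while t + d <= months:
        t += d          # advance to the next completed doubling
        h *= 2.0        # horizon doubles
        d *= 1.0 - shrink  # next doubling is faster (if shrink > 0)
    return h

# Same start, very different endpoints: at 36 months the exponential
# curve has doubled 6 times, the superexponential one 8 times, and the
# latter's remaining doubling times shrink toward zero.
exponential = [horizon_at(m) for m in range(0, 37, 6)]
superexp = [horizon_at(m, shrink=0.1) for m in range(0, 37, 6)]
```

The sketch makes the critique's target concrete: the curve's shape is driven almost entirely by the assumed shrink fraction, a parameter the post argues is not empirically grounded.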

Cited by 2 pages

| Page | Type | Quality |
|---|---|---|
| AI Timelines | Concept | 95.0 |
| Eli Lifland | Person | 58.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 98 KB
[A deep critique of AI 2027’s bad timeline models](https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-critique-of-ai-2027-s-bad-timeline-models#)

47 min read

• [Introduction:](https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-critique-of-ai-2027-s-bad-timeline-models#Introduction_)
• [Part 1: Time horizons extension model](https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-critique-of-ai-2027-s-bad-timeline-models#Part_1__Time_horizons_extension_model)
• [Overview of their forecast](https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-critique-of-ai-2027-s-bad-timeline-models#Overview_of_their_forecast)
• [The “exponential” curve](https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-critique-of-ai-2027-s-bad-timeline-models#The__exponential__curve)
• [The “superexponential” curve](https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-critique-of-ai-2027-s-bad-timeline-models#The__superexponential__curve)
• [Conceptual reasons:](https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-critique-of-ai-2027-s-bad-timeline-models#Conceptual_reasons_)
• [Argument 1: public vs internal:](https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-critique-of-ai-2027-s-bad-timeline-models#Argument_1__public_vs_internal__)
• [Argument 2 difficulty gap:](https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-critique-of-ai-2027-s-bad-timeline-models#Argument_2_difficulty_gap_)
• [Argument 3: recent progress:](https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-critique-of-ai-2027-s-bad-timeline-models#Argument_3__recent_progress_)
• [Argument 4: infinite time horizons:](https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-critique-of-ai-2027-s-bad-timeline-models#Argument_4__infinite_time_horizons_)
• [Intermediate speedups](https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-critique-of-ai-2027-s-bad-timeline-models#Intermediate_speedups)
• [Have AI 2027 been sending out a false graph?](https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-critique-of-ai-2027-s-bad-timeline-models#Have_AI_2027_been_sending_out_a_false_graph_)
• [Some skepticism about projection](https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-critique-of-ai-2027-s-bad-timeline-models#Some_skepticism_about_projection)
• [Part 2: Benchmarks and gaps and beyond](https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-critique-of-ai-2027-s-bad-timeline-models#Part_2__Benchmarks_and_gaps_and_beyond)
• [The benchmark part of “benchmark and gaps”:](https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-critique-of-ai-2027-s-bad-timeline-models#The_benchmark_part_of__benchmark_and_gaps__)
• [The time horizon part of the model](https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-critique-of-ai-2027-s-bad-timeline-models#The_time_horizon_part_of_the_model)
• [The gap model](https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-critique-of-ai-2027-s-bad-timeline-models#The_gap_model)
[What about 

... (truncated, 98 KB total)
Resource ID: 391c16ea677aad6d | Stable ID: NDc1YmE0Mz