Longterm Wiki

AI 2027, 6 Months Later - FutureSearch


A mid-2025 retrospective by FutureSearch (co-authors of the AI 2027 timeline forecast) evaluating whether real-world lab behavior supports or undermines the conditions needed for a rapid AGI takeoff by 2027.

Metadata

Importance: 55/100 · blog post · analysis

Summary

FutureSearch reviews their AI 2027 timeline forecast six months after publication, assessing whether frontier labs are actually investing in internal R&D speedups necessary for a rapid AI takeoff. They find mixed evidence, with only Anthropic showing strong focus on coding agents for internal R&D, while other labs prioritize consumer products, weakening the case for a 2027-style AGI scenario.

Key Points

  • FutureSearch predicted Superhuman Coders would take ~3x longer than other AI Futures forecasters, aligning with Karpathy's skepticism about near-term coding agents.
  • Only Anthropic shows heavy internal R&D speedup focus via Claude Code; OpenAI, xAI, and Meta are prioritizing consumer products instead.
  • The 'commercial success may trump the race to AGI' thesis appears to be playing out, with most labs not dedicating resources toward AGI-accelerating internal AI use.
  • Anthropic's safety-first Responsible Scaling Policy makes it unlikely to drive an unchecked AI 2027-style takeoff scenario even if technically positioned to.
  • Google's size, slow pace, and regulatory exposure make it an unlikely candidate to drive a rapid AI takeoff scenario either.

Cited by 1 page

| Page | Type | Quality |
|------|------|---------|
| FutureSearch | Organization | 50.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 7 KB
[← Back to Research](https://futuresearch.ai/research/)

> **Andrej Karpathy:** "There's some over-prediction going on in the industry..."
>
> **Dwarkesh Patel:** "What do you think will take a decade to accomplish? What are the bottlenecks?"
>
> **Andrej Karpathy:** "Actually making it work."
>
> — [Dwarkesh Podcast, Oct 17, 2025](https://www.dwarkesh.com/p/andrej-karpathy)

6 months ago, in April 2025, [Dwarkesh announced the _AI 2027_ project](https://www.dwarkesh.com/p/scott-daniel) on his podcast, interviewing authors Daniel Kokotajlo and Scott Alexander. Now Karpathy has justified his much longer timelines to Dwarkesh, explaining what is holding back coding agents, the first step in the _AI 2027_ timeline:

![FutureSearch co-authored the AI 2027 timeline forecast showing predictions for Superhuman Coders](https://futuresearch.ai/ai-2027-6-months-later/timelines.webp)

[FutureSearch](https://ai-2027.com/research/timelines-forecast) co-authored the AI 2027 timeline forecast. We predicted that Superhuman Coders would take about 3x longer than the other [AI Futures](https://ai-futures.org/) forecasters predicted.

## The Not-So-Fast Thesis

For AI experts, Karpathy's view is a better counterargument to short timelines than [ours](https://futuresearch.ai/ai-2027). But for non-AI-experts, we think the practical considerations we raised are worth reflecting on with 6 more months of evidence. As forecasters, this is more of an "outside view": regardless of how exactly AI improves, what problems might slow down an R&D-based takeoff scenario?

One key point was: "Commercial Success May Trump the Race to AGI". We wrote:

> So far OpenAI, the leading contender to be the company in the _AI 2027_ story, has spoken more about consumer revenue growth and less about transformative AI.
>
> This piece requires at least one frontier lab to dedicate the majority of their resources towards building AI for their own internal use. We have reason to doubt that many of them will.

An AI takeoff as soon as 2027, in the scenario, depends on a stupendous capital investment in running a vast number of expensive AI agents to do AI research inside the companies. So are they actually preparing for this, and trying it?

## Are AI Companies Focusing on R&D Speedups?

So, since April 2025, what have we learned about frontier labs directing their AI toward superhuman coding to accelerate their internal rate of R&D? Here is a quick assessment:

- **Anthropic**: Heavy focus on R&D speedup via coding agents, notably Claude Code being used extensively internally.
- **OpenAI**: Moved strongly toward consumer products, e.g. the Sora app and shopping features. Did build Codex, seemingly to compete with Claude Code.
- **Google DeepMind**: No change; similar emphasis on fundamental research, and it has always invested heavily in developer productivity.
- **xAI**: Focusing on Grok for the X algorithm, sexy companions. Grok 4 is not a top tier coding model and likely not speeding up their R&D at all.
- **Meta**: Sex

... (truncated, 7 KB total)