Longterm Wiki

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Tom's Hardware

Relevant to discussions of compute governance and the scaling race among frontier AI labs; illustrates the enormous resource concentration occurring in AI development.

Metadata

Importance: 38/100
Type: news article
Tags: news

Summary

Sam Altman has indicated that OpenAI is targeting a compute cluster of 100 million GPUs, a build-out that could cost up to $3 trillion, with the company projected to surpass 1 million GPUs in operation by the end of the year. This signals an extraordinary acceleration in AI compute investment and infrastructure ambition. The article contextualizes the staggering financial and logistical implications of scaling at this pace.

Key Points

  • OpenAI is targeting a 100 million GPU compute cluster, representing an unprecedented scale of AI infrastructure investment.
  • The projected cost of such a build-out could reach $3 trillion, raising questions about funding, partnerships, and feasibility.
  • OpenAI is expected to cross 1 million GPUs in operation by end of the year as an intermediate milestone.
  • This level of compute scaling reflects the broader race among frontier AI labs to secure massive computational resources.
  • Such infrastructure ambitions have significant implications for energy consumption, supply chains, and AI governance.
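The article's headline figures imply some useful back-of-envelope numbers. A rough sketch, using only the quantities cited above (100 million GPUs, up to $3 trillion, and the ~1 million GPU end-of-year milestone):

```python
# Back-of-envelope check of the figures cited in the article.
target_gpus = 100_000_000           # stated 100M-GPU target
total_cost_usd = 3_000_000_000_000  # upper-bound cost from the article

# Implied all-in cost per GPU at the $3T upper bound.
cost_per_gpu = total_cost_usd / target_gpus
print(f"Implied cost per GPU: ${cost_per_gpu:,.0f}")  # $30,000

# How far the target sits beyond the ~1M-GPU intermediate milestone.
scale_factor = target_gpus / 1_000_000
print(f"Scale-up from 1M-GPU milestone: {scale_factor:,.0f}x")  # 100x
```

The ~$30,000-per-GPU figure is an implied average over hardware, data centers, and power, not a chip price quoted in the article.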

1 FactBase fact citing this source

Entity | Property | Value | As Of
OpenAI | GPU Count | 1 million | Dec 2025
Resource ID: kb-83b11a731aeac0fd