Longterm Wiki

Compute (AI Capabilities)

Entry

Compute

Related

Compute Governance
AI Capabilities
Compute & Hardware
Racing Intensity

Compute refers to the hardware resources required to train and run AI systems, including GPUs, TPUs, and specialized AI accelerators. The current generation of frontier AI models requires extraordinary amounts of computational power—training runs cost tens to hundreds of millions of dollars in compute alone. The significance of compute for AI governance stems from several unique properties: it is measurable (training runs can be quantified in FLOPs), concentrated (the global semiconductor supply chain depends on chokepoints like ASML, TSMC, and NVIDIA), and physical (unlike algorithms that can be copied infinitely, hardware must be manufactured and shipped).
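The "measurable" property can be made concrete with the widely used ≈6·N·D rule of thumb for dense-transformer training compute (roughly 6 floating-point operations per parameter per training token, covering the forward and backward passes). A minimal sketch follows; the model size, token count, and the 10^26 FLOP reporting threshold are illustrative assumptions, not figures from this page:

```python
# Estimate training compute with the common ~6 * N * D approximation:
# about 6 FLOPs per parameter per training token for a dense transformer.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs for a dense model."""
    return 6.0 * n_params * n_tokens

# Hypothetical frontier-scale run: 70B parameters on 15T tokens.
flops = training_flops(70e9, 15e12)
print(f"{flops:.2e}")  # → 6.30e+24

# Compare against a hypothetical regulatory reporting threshold
# (order of magnitude inspired by proposed compute-based rules).
REPORTING_THRESHOLD_FLOPS = 1e26
print(flops >= REPORTING_THRESHOLD_FLOPS)  # → False
```

Because a quantity like this can be estimated before a run begins, compute-based thresholds are administrable in a way that capability-based rules are not.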


Related Pages

Risks

Compute Concentration
Multipolar Trap (AI Development)

Approaches

AI-Augmented Forecasting
AI Safety Cases
AI Governance Coordination Technologies

Policy

Voluntary AI Safety Commitments

Organizations

US AI Safety Institute
UK AI Safety Institute

Transition Model

Algorithms
Lab Behavior

Analysis

OpenAI Foundation Governance Paradox
Long-Term Benefit Trust (Anthropic)

Concepts

Governance-Focused Worldview

Models

Projecting Compute Spending
International AI Coordination Game Model

Key Debates

Open vs Closed Source AI
Government Regulation vs Industry Self-Governance

Labs

GovAI

People

Yoshua Bengio
Stuart Russell