Longterm Wiki

Compute (AI Capabilities)

Compute refers to the hardware resources required to train and run AI systems, including GPUs, TPUs, and specialized AI accelerators. The current generation of frontier AI models requires extraordinary amounts of computational power—training runs cost tens to hundreds of millions of dollars in compute alone. The significance of compute for AI governance stems from several unique properties: it is measurable (training runs can be quantified in FLOPs), concentrated (the global semiconductor supply chain depends on chokepoints like ASML, TSMC, and NVIDIA), and physical (unlike algorithms that can be copied infinitely, hardware must be manufactured and shipped).
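Because compute is measurable, a training run's scale can be roughly quantified before it happens. A minimal sketch of how such an estimate works, using the widely cited approximation of about 6 FLOPs per parameter per training token (the model size, token count, and price-performance figures below are illustrative assumptions, not data from this page):

```python
def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Estimate total training compute with the common C ~= 6 * N * D
    rule of thumb: roughly 6 FLOPs per parameter per training token."""
    return 6.0 * n_params * n_tokens

def estimate_cost_usd(total_flops: float, flops_per_dollar: float) -> float:
    """Rough dollar cost of a run, given an assumed hardware
    price-performance figure (FLOPs purchasable per dollar)."""
    return total_flops / flops_per_dollar

# Illustrative example: a 70-billion-parameter model trained on
# 2 trillion tokens, with an assumed 1e16 FLOPs per dollar.
flops = estimate_training_flops(70e9, 2e12)
cost = estimate_cost_usd(flops, 1e16)
print(f"Training compute: {flops:.2e} FLOPs, rough cost: ${cost:,.0f}")
```

Estimates like this are order-of-magnitude at best, but they are exactly what makes compute a tractable governance lever: regulators can define thresholds (e.g., in total training FLOPs) that are verifiable in a way that algorithmic properties are not.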



Related Pages

Policy: Compute Monitoring
Concepts: Scientific Research Capabilities