Toward a Global Regime for Compute Governance - arXiv
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: arXiv
Proposes a global governance framework for preventing dangerous AI development through compute restrictions, addressing a critical AI safety challenge by outlining technical, traceability, and regulatory mechanisms to control access to computational resources.
Paper Details
Metadata
Summary
This paper proposes a global 'Compute Pause Button'—a governance framework designed to prevent dangerous AI systems from being trained by restricting access to computational resources. The authors identify three intervention points (technical, traceability, and regulatory) organized within a Governance-Enforcement-Verification framework. Technical mechanisms include tamper-proof FLOP caps and model locking; traceability tools track chips and users across the supply chain; and regulatory mechanisms employ export controls and licensing schemes. Drawing parallels to nuclear non-proliferation and pandemic coordination, the paper argues that credible mechanisms already exist to implement this architecture and that action is urgently needed before critical capability thresholds are crossed.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Pause AI | Organization | 59.0 |
Cached Content Preview
# Toward a Global Regime for Compute Governance: Building the Pause Button
Ananthi Al Ramiah (Independent) · Raymond Koopmanschap (Independent) · Josh Thorsteinson (Independent) · Sadruddin Khan (Independent) · Jim Zhou (Independent) · Shafira Noh (Independent) · Joep Meindertsma (PauseAI) · Farhan Shafiq (AI Safety Camp Project Lead, Think Safe AI)
(June 25, 2025)
###### Abstract
As AI capabilities rapidly advance, the risk of catastrophic harm from large-scale training runs is growing. Yet the compute infrastructure that enables such development remains largely unregulated. This paper proposes a concrete framework for a global _Compute Pause Button_: a governance system designed to prevent dangerously powerful AI systems from being trained by restricting access to computational resources. We identify three key intervention points—technical, traceability, and regulatory—and organize them within a Governance–Enforcement–Verification (GEV) framework to ensure rules are clear, violations are detectable, and compliance is independently verifiable. Technical mechanisms include tamper-proof FLOP caps, model locking, and offline licensing. Traceability tools track chips, components, and users across the compute supply chain. Regulatory mechanisms establish constraints through export controls, production caps, and licensing schemes. Unlike post-deployment oversight, this approach targets the material foundations of advanced AI development. Drawing from analogues ranging from nuclear non-proliferation to pandemic-era vaccine coordination, we demonstrate how compute can serve as a practical lever for global cooperation. While technical and political challenges remain, we argue that credible mechanisms already exist, and that the time to build this architecture is now, before the window for effective intervention closes.
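Of the technical mechanisms listed above, the tamper-proof FLOP cap reduces at its core to an accounting rule: bound the total training compute a single run may consume. The sketch below shows that threshold logic under two stated assumptions, the common 6 × parameters × tokens approximation for dense-transformer training compute and an illustrative 10^26 FLOP cap; neither figure comes from this paper, and a hardware-enforced cap would meter actual chip utilization rather than trust a pre-run estimate.

```python
# Minimal sketch of the accounting a tamper-proof FLOP cap would enforce.
# The 6 * params * tokens rule of thumb for dense-transformer training
# compute and the 1e26 FLOP threshold are illustrative assumptions,
# not values taken from the paper.

def estimated_training_flop(params: float, tokens: float) -> float:
    """Approximate training compute: ~6 FLOPs per parameter per token
    (forward plus backward pass) for a dense transformer."""
    return 6.0 * params * tokens


def exceeds_cap(params: float, tokens: float, cap_flop: float = 1e26) -> bool:
    """True if a planned run would exceed the (hypothetical) regulatory cap."""
    return estimated_training_flop(params, tokens) > cap_flop


if __name__ == "__main__":
    # Example: a 1-trillion-parameter model trained on 20 trillion tokens.
    flops = estimated_training_flop(1e12, 2e13)
    print(f"Estimated training compute: {flops:.2e} FLOP")   # 1.20e+26
    print("Exceeds cap:", exceeds_cap(1e12, 2e13))           # True
```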
## 1 Executive Summary
As AI capabilities accelerate, the risks associated with large-scale training runs are growing rapidly. The most powerful AI systems, those capable of triggering profound societal disruption or catastrophic harm, require vast amounts of compute. These risks range from mass disinformation, cyberattacks, and the acceleration of biological and chemical weapons development to economic destabilization, gradual loss of human agency, and, in the extreme, existential risk through permanent loss of human control. Yet current governance efforts overwhelmingly focus on AI use and deployment, leaving the infrastructure that enables dangerous training runs largely unregulated. According to a recent estimate (Kokotajlo et al., 2025), at current rates of progress, frontier labs could cross critical danger thresholds as early as 2027–2028. Without credible mechanisms to pause or prevent threshold-exceeding compute, we face the
... (truncated, 98 KB total)