Computing Power and the Governance of AI
Authors
Sastry, Girish·Heim, Lennart·Belfield, Haydn·Anderljung, Markus·Brundage, Miles·Hazell, Julian·O'Keefe, Cullen·Hadfield, Gillian K.·Ngo, Richard·Pilz, Konstantin·Gor, George·Bluemke, Emma·Shoker, Sarah·Egan, Janet·Trager, Robert F.·Avin, Shahar·Weller, Adrian·Bengio, Yoshua·Coyle, Diane
Credibility Rating
3/5
Good (3): Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: arXiv
Analyzes compute governance as a mechanism for AI safety and oversight, exploring how computing power constraints can enable visibility, resource allocation, and enforcement in AI development—a key governance approach for managing advanced AI systems.
Paper Details
Citations
0
4 influential
Year
2024
Methodology
book-chapter
Categories
Principles of AI Governance and Model Risk Management
Metadata
arXiv preprint · analysis
Summary
The paper explores how computing power can be used to enhance AI governance through visibility, resource allocation, and enforcement mechanisms. It examines the technical and policy opportunities of compute governance while also highlighting potential risks.
Key Points
- Compute is a unique and trackable input to AI development with high governance potential
- Compute governance can enhance visibility, allocation, and enforcement of AI policy objectives
- Careful implementation is critical to avoid unintended negative consequences
Review
This comprehensive paper presents compute governance as a promising approach to managing AI development. The authors argue that computing power offers a distinctive point of intervention because it is detectable, excludable, and quantifiable, and because its supply chain is highly concentrated. Unlike other key AI inputs such as data and algorithms, compute is a tangible resource that can be monitored, controlled, and regulated. The paper systematically explores how compute governance can enhance three key governance capacities: increasing visibility into AI capabilities, steering AI progress through resource allocation, and enforcing prohibitions against reckless AI development. The authors propose numerous policy mechanisms while maintaining a balanced perspective, acknowledging risks such as privacy erosion, centralization of power, and unintended economic consequences. They stress that the design and implementation of compute governance strategies are crucial, and recommend building in safeguards to mitigate potential negative impacts.
Cited by 2 pages
| Page | Type | Quality |
|---|---|---|
| AI Acceleration Tradeoff Model | Analysis | 50.0 |
| Governance-Focused Worldview | Concept | 67.0 |
1 FactBase fact citing this source
| Entity | Property | Value | As Of |
|---|---|---|---|
| GovAI | publication | Computing Power and the Governance of Artificial Intelligence — argues compute is the most governable AI pillar, proposes international monitoring mechanisms | Feb 2024 |
Cached Content Preview
HTTP 200 · Fetched Apr 4, 2026 · 98 KB
[2402.08797] Computing Power and the Governance of Artificial Intelligence Each author contributed ideas and/or writing to the paper. However, being an author does not imply agreement with every claim made in the paper, nor does it represent an endorsement from any author’s respective organization. ∗ Denotes primary authors, who contributed most significantly to the direction and content of the paper. Both primary authors and other authors are listed in approximately descending order of contribution. † Indicates the corresponding authors: Girish Sastry (girish@openai.com), Lennart Heim (lennart.heim@governance.ai), and Haydn Belfield (hb492@cam.ac.uk). Figures can be accessed at https://github.com/lheim/CPGAI-Figures.
Computing Power and the Governance of Artificial Intelligence
Girish Sastry, ∗†1 Lennart Heim, ∗†2 Haydn Belfield, ∗†3
Markus Anderljung, ∗2 Miles Brundage, ∗1 Julian Hazell, ∗2,4 Cullen O’Keefe, ∗1,5
Gillian K. Hadfield, ∗6,7
Richard Ngo, 1 Konstantin Pilz, 8 George Gor, 9
Emma Bluemke, 2
Sarah Shoker, 1 Janet Egan, 10 Robert F. Trager, 11
Shahar Avin, 12
Adrian Weller, 13 Yoshua Bengio, 14 Diane Coyle 15
1 OpenAI, 2 Centre for the Governance of AI (GovAI),
3 Leverhulme Centre for the Future of Intelligence, Uni. of Cambridge,
4 Oxford Internet Institute, 5 Institute for Law & AI, 6 University of Toronto
7 Vector Institute for AI, 8 Georgetown University, 9 ILINA Program, 10 Harvard Kennedy School,
11 AI Governance Institute, Uni. of Oxford, 12 Centre for the Study of Existential Risk, Uni. of Cambridge,
13 Uni. of Cambridge, 14 Uni. of Montreal / Mila, 15 Bennett Institute, Uni. of Cambridge
(February 14, 2024)
Computing power, or "compute," is crucial for the development and deployment of artificial intelligence (AI) capabilities. As a result, governments and companies have started to leverage compute as a means to govern AI. For example, governments are investing in domestic compute capacity, controlling the flow of compute to competing countries, and subsidizing compute access to certain sectors. However, these efforts only scratch the surface of how compute can be used to govern AI development and deployment. Relative to other key inputs to AI (data
... (truncated, 98 KB total)