GovAI - Computing Power and the Governance of AI
Credibility Rating
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: Centre for the Governance of AI
A foundational GovAI report (Feb 2024) on compute governance as an AI policy instrument, widely cited in policy discussions around export controls, chip monitoring, and hardware-level AI safety mechanisms.
Metadata
Summary
A GovAI report examining compute governance as a lever for AI policy, arguing that AI chips' detectability, excludability, and quantifiability make compute a uniquely tractable governance target. The report covers mechanisms like tracking, subsidizing, restricting access, and embedding hardware guardrails, while cautioning that compute governance carries risks of civil liberties violations, power concentration, and authoritarian misuse.
Key Points
- Training compute for leading AI systems increased 350 million-fold over thirteen years, making it a central driver of AI progress.
- Compute is governable because it is detectable, excludable (as a physical good), and quantifiable, aided by a highly concentrated supply chain.
- Governments can govern compute via tracking/monitoring, subsidizing/restricting access, and embedding hardware-level guardrails.
- Compute governance is a double-edged sword: it can advance safety goals but risks enabling authoritarianism, civil liberties infringements, and entrenching existing power imbalances.
- The report is co-authored by nineteen researchers across academia, civil society, and industry, including OpenAI employees, warranting critical engagement with potential biases.
Cited by 2 pages
| Page | Type | Quality |
|---|---|---|
| AI Safety Intervention Effectiveness Matrix | Analysis | 73.0 |
| Compute Governance | Concept | 58.0 |
Cached Content Preview
Computing Power and the Governance of AI | GovAI
Recent AI progress has largely been driven by increases in the amount of computing power used to train new models. Governing compute could be an effective way to achieve AI policy goals, but could also introduce new societal risks.
Lennart Heim,* Markus Anderljung, Emma Bluemke, Robert Trager
Research Posts February 14, 2024
This post summarises a new report, “Computing Power and the Governance of Artificial Intelligence.” The full report is a collaboration between nineteen researchers from academia, civil society, and industry. It can be read here.
GovAI research blog posts represent the views of their authors, rather than the views of the organisation.
Summary
Computing power – compute for short – is a key driver of AI progress. Over the past thirteen years, the amount of compute used to train leading AI systems has increased by a factor of 350 million. This has enabled the major AI advances that have recently gained global attention.
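As a rough check on the headline figure, a 350-million-fold increase over thirteen years implies an annual growth factor of roughly 4.5x, or a doubling time of about five and a half months. A minimal back-of-envelope sketch (the 350 million and thirteen-year figures come from the report; the steady-exponential-growth assumption is ours, not the report's):

```python
import math

# Figures from the report: ~350 million-fold growth in training
# compute over thirteen years.
total_growth = 350e6
years = 13

# Assume steady exponential growth (a simplifying assumption,
# not a claim made by the report).
annual_factor = total_growth ** (1 / years)            # growth per year
doubling_months = 12 * math.log(2) / math.log(annual_factor)

print(f"annual growth factor: {annual_factor:.1f}x")   # ~4.5x
print(f"doubling time: {doubling_months:.1f} months")  # ~5.5 months
```

This is consistent with the commonly cited observation that frontier training compute has been doubling roughly every six months in recent years.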
Governments have taken notice. They are increasingly engaged in compute governance: using compute as a lever to pursue AI policy goals, such as limiting misuse risks, supporting domestic industries, or engaging in geopolitical competition.
There are at least three ways compute can be used to govern AI. Governments can:
Track or monitor compute to gain visibility into AI development and use
Subsidise or limit access to compute to shape the allocation of resources across AI projects
Monitor activity, limit access, or build “guardrails” into hardware to enforce rules
Compute governance is a particularly important approach to AI governance because it is feasible. Compute is detectable: training advanced AI systems requires tens of thousands of highly advanced AI chips, which cannot be acquired or used inconspicuously. It is excludable: AI chips, being physical goods, can be given to or withheld from specific actors and, in some cases, restricted to specific uses. And it is quantifiable: chips, their features, and their usage can be measured. Compute’s detectability and excludability are further enhanced by the highly concentrated structure of the AI supply chain: very few companies are capable of producing the tools needed to design advanced chips, the machines needed to make them, or the data centers that house them.
However, just because compute can be used as a tool to govern AI doesn’t mean that it should be used in all cases. Compute governance is a double-edged sword, with both potential benefits and the risk of negative consequences: it can support widely shared goals like safety, but it can also be used to infringe on civil liberties, perpetuate existing power structures, and entrench authoritarian regimes. Indeed, some things are better ungoverned.
In our paper we argue that compute is a particularly promising
... (truncated, 18 KB total)