The Role of Compute Thresholds for AI Governance
A legal-academic analysis relevant to AI governance researchers and policymakers, examining how compute thresholds function within regulatory frameworks such as the EU AI Act and US executive orders on AI safety.
Metadata
Importance: 72/100 | policy brief | analysis
Summary
This article analyzes training compute thresholds as a regulatory tool for AI governance, examining their use in identifying high-risk AI models. It outlines the advantages of compute as a regulatory metric (quantifiability, verifiability, scalability) while acknowledging limitations like algorithmic efficiency gains, and recommends treating compute thresholds as filters triggering further scrutiny rather than definitive risk measures.
Key Points
- Training compute thresholds correlate with model performance and risk, making them useful for targeting regulatory requirements such as reporting and capability evaluations.
- Compute is attractive as a regulatory metric because it is objective, quantifiable, estimable before training, and verifiable after training.
- Thresholds enable narrow targeting of powerful AI systems without burdening small companies, academics, or individual researchers.
- Algorithmic innovation could reduce the compute needed for advanced models, so thresholds must include update mechanisms to remain relevant.
- Both the US (Biden EO 14110) and the EU AI Act have adopted compute thresholds as triggers for compliance obligations, signaling regulatory convergence.
Cited by 5 pages
| Page | Type | Quality |
|---|---|---|
| Intervention Timing Windows | Analysis | 72.0 |
| Model Registries | Concept | 68.0 |
| Compute Monitoring | Approach | 69.0 |
| Pause Advocacy | Approach | 91.0 |
| Compute Thresholds | Concept | 91.0 |
Cached Content Preview
HTTP 200 | Fetched Mar 20, 2026 | 81 KB


## Abstract
Advances in artificial intelligence (“AI”) could bring transformative changes in society. AI has the potential for immense opportunities and benefits across a wide range of sectors, from healthcare and drug discovery to public services, and it could broadly improve productivity and living standards. However, more capable AI models also have the potential to cause extreme harm. AI could be misused for more effective disinformation, surveillance, cyberattacks, and development of chemical and biological weapons. More capable models are also likely to possess unexpected dangerous capabilities not yet observed in existing models. Laws can mitigate these risks, but in doing so must identify which models pose the greatest dangers and thus warrant regulatory attention.
This Article discusses the role of training compute thresholds, which use training compute to determine which potentially dangerous models are subject to legal requirements, such as reporting and evaluations. Since the amount of compute used to train a model corresponds to performance, with occasional surprising leaps, a training compute threshold (1) can be used to target the desired level of performance and corresponding risk. Several further properties of compute make it an attractive regulatory target: it is (2) essential for training, (3) objective and quantifiable, (4) capable of being estimated before training, and (5) verifiable after training. Since the amount of compute necessary to train cutting-edge models costs millions of dollars and usually relies on specialized hardware, training compute thresholds also (6) enable regulators to narrowly target potentially dangerous AI systems without burdening small companies, academic institutions, and individual researchers.
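The "estimable before training" property above can be made concrete with a rough calculation. The sketch below uses the common 6·N·D rule of thumb (roughly 6 FLOPs per parameter per training token) to estimate a run's training compute in advance and compare it against the thresholds the Article later discusses: the EU AI Act's 10^25 FLOP presumption of systemic risk and US Executive Order 14110's 10^26-operation reporting trigger. The function names and the example model size are illustrative, not drawn from the Article.

```python
# Hedged sketch: pre-training compute estimation against regulatory thresholds.
# The 6*N*D approximation and both threshold values are widely cited figures;
# the model in the example (70B parameters, 15T tokens) is hypothetical.

def estimated_training_flops(params: float, tokens: float) -> float:
    """Rule-of-thumb estimate: ~6 FLOPs per parameter per training token."""
    return 6 * params * tokens

THRESHOLDS = {
    "EU AI Act (systemic-risk presumption)": 1e25,   # 10^25 FLOP
    "US EO 14110 (reporting trigger)": 1e26,         # 10^26 operations
}

def thresholds_crossed(flops: float) -> list[str]:
    """Names of thresholds met or exceeded by the compute estimate."""
    return [name for name, limit in THRESHOLDS.items() if flops >= limit]

# Example: a 70B-parameter model trained on 15 trillion tokens
flops = estimated_training_flops(70e9, 15e12)  # ~6.3e24 FLOPs
print(f"{flops:.2e}", thresholds_crossed(flops) or "below all thresholds")
```

A developer (or regulator) can run this check before training begins, which is exactly what makes compute attractive as a trigger for ex-ante obligations like reporting: unlike capability evaluations, it requires no finished model.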
However, training compute thresholds are not infallible. Training compute is not an exhaustive measurement of risk; it does not track all risks posed by AI and is not a precise indicator of how harmful a model may be. Technological changes, such as algorithmic innovation, could also significantly reduce how much compute is needed to train an advanced model. For these reasons, a training compute threshold should be treated as a filter and a trigger for further scrutiny, rather than an end in itself, and should be accompanied by a mechanism for updating the threshold.
Indeed, the United States and the European Union (“EU”) have recognized the significance of compute in recent initiatives, which seek to ensure the safe and responsible development of AI in part by establishing training compute thresholds that trigger reporting requirements, capability evaluations, and incident monito
... (truncated, 81 KB total)
Resource ID: 510c42bfa643b8de | Stable ID: M2FlMTE1NT