Longterm Wiki

Training Compute Thresholds: Features and Functions in AI Regulation | GovAI

government

Credibility Rating

4/5
High (4)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Centre for the Governance of AI

A GovAI policy paper providing technical and regulatory grounding for compute-based AI governance thresholds; directly relevant to understanding the rationale behind the 10^26 FLOP thresholds used in the EU AI Act and US AI Executive Order.

Metadata

Importance: 72/100 · working paper · analysis

Summary

This paper evaluates training compute as a regulatory metric for identifying high-risk general-purpose AI (GPAI) models, arguing it is currently the best available proxy due to its correlation with capabilities, early measurability, and external verifiability. The authors position compute thresholds as an initial filter to trigger further scrutiny—such as evaluations and risk assessments—rather than as standalone determinants of mitigation requirements. The paper directly informs real-world regulatory frameworks including the EU AI Act and US executive orders.

Key Points

  • Training compute is currently the most suitable metric for regulatory oversight of GPAI models due to its quantifiability, early availability, and external verifiability.
  • Compute thresholds should function as an initial filter triggering further scrutiny (e.g., capability evaluations, risk assessments), not as direct determinants of mitigation measures.
  • Compute is an imperfect risk proxy—some high-risk models may fall below thresholds while some above may pose limited risk—so threshold design requires care.
  • The paper directly engages with how compute thresholds are implemented in US (Executive Order 14110) and EU (AI Act) regulatory frameworks.
  • As algorithmic efficiency improves over time, fixed compute thresholds will need regular recalibration to remain meaningful proxies for risk.
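The filter role described in these points can be sketched in a few lines of code. The 6·N·D FLOP approximation and the parameter/token counts below are illustrative assumptions; the 10^26 operations (US Executive Order 14110) and 10^25 FLOP (EU AI Act, systemic-risk presumption) thresholds are the ones the paper discusses.

```python
# Hedged sketch: training compute as an initial regulatory filter.
# Uses the common 6*N*D rule of thumb (~6 FLOP per parameter per
# training token); the model figures below are illustrative, not
# disclosures about any real system.

def training_flop(params: float, tokens: float) -> float:
    """Approximate training compute via the 6*N*D heuristic."""
    return 6.0 * params * tokens

# Thresholds from the regulations the paper analyzes.
THRESHOLDS = {
    "US EO 14110": 1e26,
    "EU AI Act (systemic risk)": 1e25,
}

def triggered_reviews(flop: float) -> list[str]:
    """Crossing a threshold triggers further scrutiny (notification,
    evaluations, risk assessments), not mitigation measures directly."""
    return [name for name, limit in THRESHOLDS.items() if flop >= limit]

# Illustrative model: 1.8e12 parameters trained on 1.5e13 tokens.
flop = training_flop(1.8e12, 1.5e13)
print(f"{flop:.2e}", triggered_reviews(flop))
```

Under these assumed numbers the estimate (about 1.6e26 FLOP) crosses both thresholds, so both review processes would be triggered; a model an order of magnitude smaller would clear the EU filter but not the US one.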

Cited by 2 pages

| Page | Type | Quality |
| --- | --- | --- |
| Pause Advocacy | Approach | 91.0 |
| Compute Thresholds | Concept | 91.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 6 KB

# Training Compute Thresholds: Features and Functions in AI Regulation

Regulators in the US and EU are using thresholds based on training compute—the number of computational operations used in training—to identify general-purpose artificial intelligence (GPAI) models that may pose risks of large-scale societal harm. We argue that training compute currently is the most suitable metric to identify GPAI models that deserve regulatory oversight and further scrutiny. Training compute correlates with model capabilities and risks, is quantifiable, can be measured early in the AI lifecycle, and can be verified by external actors, among other advantageous features. These features make compute thresholds considerably more suitable than other proposed metrics to serve as an initial filter to trigger additional regulatory requirements and scrutiny. However, training compute is an imperfect proxy for risk. As such, compute thresholds should not be used in isolation to determine appropriate mitigation measures. Instead, they should be used to detect potentially risky GPAI models that warrant regulatory oversight, such as through notification requirements, and further scrutiny, such as via model evaluations and risk assessments, the results of which may inform which mitigation measures are appropriate. In fact, this appears largely consistent with how compute thresholds are used today. As GPAI technology and market structures evolve, regulators should update compute thresholds and complement them with other metrics into regulatory review processes.
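The recalibration concern in the abstract's final sentence can be made concrete with a small sketch. The two-year efficiency-doubling rate below is an illustrative assumption, not a figure from the paper: if algorithmic progress halves the compute needed for a given capability on some cadence, a fixed physical-compute threshold targets an ever-lower capability level over time.

```python
# Hedged sketch of threshold drift: assuming the compute needed for a
# fixed capability halves every `doubling_years` (illustrative), the
# physical-compute level matching the originally targeted capability
# shrinks accordingly, so a static threshold over-captures newer models.

def equivalent_threshold(threshold_flop: float, years: float,
                         doubling_years: float = 2.0) -> float:
    """Physical compute that matches the capability the original
    threshold targeted, after `years` of efficiency gains."""
    return threshold_flop / 2.0 ** (years / doubling_years)

# Under the 2-year doubling assumption, a 1e26 FLOP threshold set today
# corresponds to roughly 2.5e25 FLOP of capability-equivalent compute
# four years later.
print(f"{equivalent_threshold(1e26, 4.0):.1e}")
```

This is the sense in which the authors argue thresholds need periodic updating: the mapping from compute to capability, not the compute measurement itself, is what changes.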

[Read paper](https://arxiv.org/pdf/2405.10799)

**Theme:** [AI Regulation](https://www.governance.ai/theme/ai-regulation)

##### Date

August 7, 2024

##### Authors

Lennart Heim, Leonie Koessler



## Related publications

[**AI Regulation**
**Labeling of AI Agent Activity in Article 50 of the EU AI Act**
**November 2025** · **Policy Brief** · **Alan Chan**
The online activities of AI agents could distort human beliefs and behaviors. For example, humans could mistake...](https://www.governance.ai/research-paper/

... (truncated, 6 KB total)
Resource ID: d76d92e6cd91fb5d | Stable ID: NmY0MWUxYz