
Fenwick: Technological Challenges for Regulatory Thresholds of AI Compute

web

A law firm analysis relevant to AI governance practitioners tracking how compute-based regulatory thresholds are being developed; useful for understanding the legal landscape around frontier AI model oversight.

Metadata

Importance: 52/100 | organizational report | analysis

Summary

A legal analysis from Fenwick examining evolving regulatory frameworks that use computational thresholds (e.g., FLOP counts) to define which AI systems trigger oversight requirements. The piece reviews key policy developments at state and federal levels relevant to frontier AI governance.

Key Points

  • Explores how regulators are using compute thresholds (measured in FLOPs) as proxies to identify frontier AI models subject to oversight.
  • Reviews recent legislative and regulatory activity at both state and federal levels targeting high-compute AI systems.
  • Discusses practical implications for AI developers regarding compliance obligations tied to training compute.
  • Highlights the tension between compute-based thresholds as a tractable regulatory mechanism vs. their limitations as AI architectures evolve.
  • Provides a legal perspective on how threshold definitions may shape industry practices and model development decisions.

Cited by 3 pages

Cached Content Preview

HTTP 200 | Fetched Mar 15, 2026 | 11 KB
Technological Challenges for Regulatory Thresholds of AI… | Fenwick
What You Need To Know

  • Generally, larger AI models that require sizable compute for training are more likely to be subject to regulation and additional compliance requirements.
  • But some new technological methods, such as novel model architectures (like the Mixture of Agents approach) and distributed compute, may challenge compute-based regulation.
  • Regulation is bound to evolve along with the industry, so it's crucial to keep up to date on technological and regulatory advances alike.

In order to train generative AI models, in particular frontier large language models (LLMs) and multimodal models, it is necessary to perform vast amounts of computations, typically carried out on massive clusters of Graphics Processing Units (GPUs) or other specialized AI chips. We have also seen the emergence of certain scaling laws, showing how much compute and data are required for optimal training based on model size, with the general pattern being up and to the right when it comes to data, number of parameters, and compute. Regulators have expressed interest in applying certain additional measures on these especially capable foundation models, and in an attempt to demarcate such models, regulators have focused on the amount of compute used to train them. We have seen some trends emerge in recent AI regulation and legislation on compute thresholds for particularly capable AI systems, for instance:


  • Article 51 of the EU AI Act specifies 10^25 floating point operations (FLOPs) as the threshold for a general-purpose AI system being deemed a systemic risk (viz. possessing high-impact capabilities), and hence being subject to additional regulatory requirements.
  • President Biden's Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Executive Order) specifies 10^26 FLOPs as the threshold for triggering certain reporting obligations to the Federal Government (Section 4.2(b)(i)) and being deemed a dual-use foundation model capable of malicious cyber-enabled activity (Section 4.2(c)(iii)).
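
To make these thresholds concrete, the sketch below estimates a training run's compute using the widely cited approximation of roughly 6 FLOPs per parameter per training token and checks the result against the two thresholds above. The approximation and the example model size are illustrative assumptions, not figures from the Fenwick analysis.

```python
# Back-of-the-envelope check of whether a training run crosses the regulatory
# thresholds cited above. Uses the common ~6 * parameters * tokens approximation
# for dense-transformer training FLOPs -- an assumption for illustration, not a
# figure from the Fenwick analysis.

EU_AI_ACT_THRESHOLD = 1e25  # EU AI Act Article 51 (systemic-risk presumption)
US_EO_THRESHOLD = 1e26      # Executive Order 14110, Section 4.2


def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough estimate: ~6 FLOPs per parameter per training token."""
    return 6.0 * parameters * training_tokens


def thresholds_crossed(flops: float) -> list[str]:
    crossed = []
    if flops >= EU_AI_ACT_THRESHOLD:
        crossed.append("EU AI Act Art. 51 (1e25 FLOPs)")
    if flops >= US_EO_THRESHOLD:
        crossed.append("EO 14110 Sec. 4.2 (1e26 FLOPs)")
    return crossed


if __name__ == "__main__":
    # Illustrative run: a hypothetical 400B-parameter model trained on 15T tokens.
    flops = estimated_training_flops(parameters=400e9, training_tokens=15e12)
    print(f"Estimated training compute: {flops:.2e} FLOPs")  # ~3.60e+25
    print("Thresholds crossed:", thresholds_crossed(flops) or "none")
```

Under these assumptions the example run crosses the EU threshold but not the US one, which illustrates the order-of-magnitude gap between the two regimes.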
 
 More Compute, More Money 


 But what do these compute thresholds practically mean? Jack Clark, co-founder and director of policy at Anthropic, provided some back-of-the-napkin math suggesting that a model trained with 10^25 FLOPs of compute would cost $7–10M to train, whereas a model trained with 10^26 FLOPs of compute would cost $70–100M to train. This comports with California's proposed SB 1047, which asserts that a covered model is either one trained with 10^26 FLOPs of compute, or one that cost greater than $100M to train. Hence the regulatory and legislative trend appears to be that larger models trained with sizable compute are more likely to be subject to regulation and additional compliance requirements; however, there is some misalignm
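
As a minimal sketch of the arithmetic behind those figures, assuming training cost scales roughly linearly with compute (an illustrative assumption, not a cost model from the article or from Anthropic):

```python
# Minimal sketch of the scaling implied by the back-of-the-napkin figures quoted
# above (illustration only). If cost scales roughly linearly with compute, a 10x
# jump in the FLOP threshold implies roughly a 10x jump in training cost.

def implied_cost_per_flop(total_cost_usd: float, flops: float) -> float:
    return total_cost_usd / flops


# Jack Clark's figures: ~$7-10M at 1e25 FLOPs.
low = implied_cost_per_flop(7e6, 1e25)    # 7e-19 USD per FLOP
high = implied_cost_per_flop(10e6, 1e25)  # 1e-18 USD per FLOP

# Extrapolating linearly to the 1e26 FLOP threshold reproduces the $70-100M range,
# which is also roughly where SB 1047's $100M cost prong sits.
print(f"Estimated cost at 1e26 FLOPs: ${low * 1e26 / 1e6:.0f}M-${high * 1e26 / 1e6:.0f}M")
```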

... (truncated, 11 KB total)
Resource ID: 11744b15b6c17b92 | Stable ID: OGMyZmYwMT