Longterm Wiki

Credibility Rating

High (4/5)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: RAND Corporation

Published June 2025 by RAND's Technology and Security Policy Center, this report is directly relevant to implementing compute governance provisions similar to those in Executive Order 14110, and informs debates about the feasibility of compute-based AI monitoring regimes.

Metadata

Importance: 62/100 · organizational report · analysis

Summary

This RAND Corporation report uses game-theoretic analysis to examine detection and monitoring mechanisms cloud service providers could use to identify large AI training runs. It finds significant gaps in monitoring schemes based solely on floating-point operation thresholds, showing many capable AI models could be trained without detection. The authors recommend hybrid compute and non-compute governance approaches and continued research into adaptive monitoring frameworks.

Key Points

  • FLOPs-only monitoring thresholds create significant detection gaps; many capable AI models could be trained without triggering reporting requirements.
  • Game-theoretic modeling reveals strategic vulnerabilities in compute governance schemes that adversarial actors could exploit.
  • Cloud service providers need more sophisticated detection strategies beyond simple compute thresholds to effectively monitor AI training runs.
  • Policymakers should pursue hybrid approaches combining compute-based and non-compute-based AI governance mechanisms.
  • Ongoing research into detection vulnerabilities and adaptive thresholds is necessary as AI capabilities and training methods evolve.
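The detection-gap claim in the first bullet can be made concrete with a minimal sketch (illustrative, not taken from the report): using the standard ~6·N·D approximation for dense-transformer training FLOPs (N = parameters, D = training tokens) and an EO 14110-style 10^26-FLOP reporting trigger, a run above the threshold can be restructured into shards that each stay below it. All model sizes and the threshold value here are hypothetical.

```python
# Illustrative sketch (not from the report): how a FLOPs-only reporting
# rule can be evaded by restructuring a training run.
# Assumptions: the standard ~6*N*D estimate of dense-transformer training
# FLOPs, and an EO 14110-style 1e26-FLOP reporting trigger.

THRESHOLD_FLOPS = 1e26  # illustrative reporting threshold

def training_flops(params: float, tokens: float) -> float:
    """Rough training cost: ~6 FLOPs per parameter per training token."""
    return 6.0 * params * tokens

def must_report(flops: float, threshold: float = THRESHOLD_FLOPS) -> bool:
    """Naive FLOPs-only rule: report iff a single run crosses the threshold."""
    return flops >= threshold

# A hypothetical 1T-parameter model trained on 20T tokens crosses the line...
full_run = training_flops(1e12, 20e12)  # 1.2e26 FLOPs
assert must_report(full_run)

# ...but the same total compute, split into two 10T-token runs (e.g. framed
# as a base run plus "continued pretraining"), never triggers a report.
shards = [training_flops(1e12, 10e12) for _ in range(2)]  # 6e25 FLOPs each
assert not any(must_report(f) for f in shards)
assert sum(shards) == full_run
```

This is the simplest of the strategic gaps the report models; the authors' recommendation of hybrid compute and non-compute mechanisms follows from how cheaply such restructuring defeats a single-run threshold.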

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| Compute Monitoring | Approach | 69.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 5 KB
Strategies and Detection Gaps in a Game-Theoretic Model of Compute Governance | RAND 
 This report documents research and analysis conducted as part of a study to investigate detection and monitoring mechanisms that cloud service providers could employ to identify large artificial intelligence (AI) training runs. The intended audience of this report is national policymakers interested in compute-based AI governance. This report may be of interest to cloud service providers and infrastructure-as-a-service companies.

 Strategies and Detection Gaps in a Game-Theoretic Model of Compute Governance


 Alvin Moon, Padmaja Vedula, Jesse Geneson, Simon Bar-on

 
 Research Published Jun 16, 2025 

 Key Findings


 
 If future AI governance policy obligates cloud service providers to monitor and report activity based only on floating-point operation thresholds, providers may fail to detect and report many AI training runs.

 In the future, many types of capable AI models will likely exist whose training will be hard to detect. The authors outline strategies that could aid cloud service providers with AI training detection in cloud computing environments.

 
 
 
 Recommendations


 
 To develop effective AI governance, policymakers should support efforts to find detection gaps in compute-based monitoring schemes.

 Policymakers should continue to pursue both compute- and noncompute-based AI governance.

 Continuing research into effective thresholds for compute monitoring is required to create a robust compute-based monitoring framework that can adapt to technological progress.

 
 
 

 
 Topics

 Artificial Intelligence 
 Network Analysis 
 
 

 
 
 

 
 Document Details

 

 
 Copyright: RAND Corporation
 Availability: Web-Only

 Year: 2025
 Pages: 25
 DOI: https://doi.org/10.7249/RRA3686-1 

 Document Number: RR-A3686-1

 

 

 

 

 
 Citation

 
 
 RAND Style Manual

 Moon, Alvin, Padmaja Vedula, Jesse Geneson, and Simon Bar-on, Strategies and Detection Gaps in a Game-Theoretic Model of Compute Governance, RAND Corporation, RR-A3686-1, 2025. As of February 25, 2026: https://www.rand.org/pubs/research_reports/RRA368

... (truncated, 5 KB total)
Resource ID: 1e392ce476b43c8f | Stable ID: ODYxYzU5Yj