Longterm Wiki

Mechanisms to Verify International Agreements About AI Development


Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: ResearchGate

A June 2025 CSET-affiliated preprint providing a practical policy framework for verifying international AI agreements, relevant to researchers and policymakers working on AI governance and arms control analogies.

Metadata

Importance: 72/100
Tags: working paper, analysis

Summary

This report surveys verification mechanisms for international AI development agreements, examining how countries could confirm compliance with agreed-upon rules across three example policy goals. It argues that while many ideal technical verification solutions remain infeasible, increased physical access (e.g., data center inspections) can substitute for them, making ambitious international coordination achievable given sufficient political will.

Key Points

  • International AI agreements require robust verification mechanisms to detect violations and build mutual confidence between signatories.
  • Three example policy goals are analyzed to demonstrate practical verification approaches for AI development and deployment claims.
  • Physical inspections of data centers can substitute for technically infeasible cryptographic or software-based verification methods.
  • Compute monitoring is identified as a key verification target, with different mechanisms suited to different scales of compute.
  • Approaches are applicable both to international state-level agreements and domestic regulation of AI companies.

Cited by 1 page

| Page | Type | Quality |
|------|------|---------|
| International Compute Regimes | Concept | 67.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 98 KB
Preprint · PDF available

# Mechanisms to Verify International Agreements About AI Development

- June 2025

DOI: [10.48550/arXiv.2506.15867](https://doi.org/10.48550/arXiv.2506.15867)

- License
- [CC BY 4.0](https://www.researchgate.net/deref/https%3A%2F%2Fcreativecommons.org%2Flicenses%2Fby%2F4.0%2F)

Authors:

[Aaron Scher](https://www.researchgate.net/scientific-contributions/Aaron-Scher-2316886225)


[Lisa Thiergart](https://www.researchgate.net/scientific-contributions/Lisa-Thiergart-2259142215)


Preprints and early-stage research may not have been peer reviewed yet.

[Download file PDF](https://www.researchgate.net/publication/392917600_Mechanisms_to_Verify_International_Agreements_About_AI_Development/fulltext/6858c59fe8fa0f5c2825c0b9/Mechanisms-to-Verify-International-Agreements-About-AI-Development.pdf)

[Read file](https://www.researchgate.net/publication/392917600_Mechanisms_to_Verify_International_Agreements_About_AI_Development#read)

[Download citation](https://www.researchgate.net/publication/392917600_Mechanisms_to_Verify_International_Agreements_About_AI_Development/citation/download)


* * *


## Abstract and Figures

International agreements about AI development may be required to reduce catastrophic risks from advanced AI systems. However, agreements about such a high-stakes technology must be backed by verification mechanisms: processes or tools that give one party greater confidence that another is following the agreed-upon rules, typically by detecting violations. This report gives an overview of potential verification approaches for three example policy goals, aiming to demonstrate how countries could practically verify claims about each other's AI development and deployment. The focus is on international agreements and state-involved AI development, but these approaches could also be applied to domestic regulation of companies. While many of the ideal solutions for verification are not yet technologically feasible

... (truncated, 98 KB total)