Longterm Wiki

Third-party compliance reviews for frontier AI safety frameworks

paper

Authors

Aidan Homewood·Sophie Williams·Noemi Dreksler·John Lidiard·Malcolm Murray·Lennart Heim·Marta Ziosi·Seán Ó hÉigeartaigh·Michael Chen·Kevin Wei·Christoph Winter·Miles Brundage·Ben Garfinkel·Jonas Schuett

Credibility Rating

3/5
Good (3)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

Relevant for those working on AI governance and accountability mechanisms, particularly how to operationalize safety commitments made by frontier labs through credible third-party verification.

Paper Details

Citations
2
0 influential
Year
2025
Methodology
survey

Metadata

Importance: 72/100 · arXiv preprint · analysis

Abstract

Safety frameworks have emerged as a best practice for managing risks from frontier artificial intelligence (AI) systems. However, it may be difficult for stakeholders to know if companies are adhering to their frameworks. This paper explores a potential solution: third-party compliance reviews. During a third-party compliance review, an independent external party assesses whether a frontier AI company is complying with its safety framework. First, we discuss the main benefits and challenges of such reviews. On the one hand, they can increase compliance with safety frameworks and provide assurance to internal and external stakeholders. On the other hand, they can create information security risks, impose additional cost burdens, and cause reputational damage, but these challenges can be partially mitigated by drawing on best practices from other industries. Next, we answer practical questions about third-party compliance reviews, namely: (1) Who could conduct the review? (2) What information sources could the reviewer consider? (3) How could compliance with the safety framework be assessed? (4) What information about the review could be disclosed externally? (5) How could the findings guide development and deployment actions? (6) When could the reviews be conducted? For each question, we evaluate a set of plausible options. Finally, we suggest "minimalist", "more ambitious", and "comprehensive" approaches for each question that a frontier AI company could adopt.

Summary

This paper examines how third-party compliance reviews could be used to verify whether frontier AI labs are adhering to their published safety frameworks and commitments. It analyzes the design, scope, and limitations of such reviews, drawing on analogies from other high-stakes industries, and proposes concrete mechanisms to strengthen accountability in AI safety governance.

Key Points

  • Argues that voluntary safety frameworks from frontier AI labs lack credible external verification, creating accountability gaps.
  • Proposes structured third-party compliance review processes analogous to auditing regimes in finance, aviation, and nuclear sectors.
  • Discusses key design choices: reviewer independence, scope of review, access to information, and public disclosure of findings.
  • Identifies challenges including proprietary information concerns, lack of standardized metrics, and regulatory capacity constraints.
  • Positions compliance reviews as a near-term governance tool that complements but does not replace deeper regulatory oversight.

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| Seoul Declaration on AI Safety | Policy | 60.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 6 KB
# Third-party compliance reviews for frontier AI safety frameworks

Aidan Homewood¹ · Sophie Williams¹ · Noemi Dreksler¹ · John Lidiard¹ · Malcolm Murray² · Lennart Heim¹ · Marta Ziosi³ · Seán Ó hÉigeartaigh⁴ · Michael Chen⁵ · Kevin Wei⁶ · Christoph Winter⁴,⁷ · Miles Brundage⁸ · Ben Garfinkel¹ · Jonas Schuett¹

1Centre for the Governance of AI
2SaferAI
3Oxford Martin AI Governance Initiative

4Leverhulme Centre for the Future of Intelligence, University of Cambridge

5METR
6Harvard University
7Institute for Law & AI
8Independent
Corresponding author: aidan.homewood@governance.ai. The author contributed to this work in a personal capacity, independent of their role in the European Union General-Purpose AI Code of Practice.


## Executive summary

This paper makes the case for third-party compliance reviews for frontier AI safety frameworks and answers practical questions about how to conduct them.

### What are third-party compliance reviews? ([Section 1](https://ar5iv.labs.arxiv.org/html/2505.01643#S1))

During a third-party compliance review, an independent external party assesses whether a frontier AI company complies with its safety framework. Anthropic and G42 have already committed to commissioning such reviews, while the third draft of the EU General-Purpose AI Code of Practice recommends that companies assess whether they will adh

... (truncated, 6 KB total)