Longterm Wiki

Research paper tracking 30 indicators

Authors

Jennifer Wang · Kayla Huang · Kevin Klyman · Rishi Bommasani

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

This paper evaluates major AI companies' compliance with voluntary White House commitments through a detailed scoring rubric, providing empirical evidence on governance effectiveness and corporate accountability in AI safety practices.

Paper Details

Citations
2
0 influential
Year
2025
Methodology
report

Metadata

arXiv preprint · analysis

Abstract

Voluntary commitments are central to international AI governance, as demonstrated by recent voluntary guidelines from the White House to the G7, from Bletchley Park to Seoul. How do major AI companies make good on their commitments? We score companies based on their publicly disclosed behavior by developing a detailed rubric based on their eight voluntary commitments to the White House in 2023. We find significant heterogeneity: while the highest-scoring company (OpenAI) scores an 83% overall on our rubric, the average score across all companies is just 53%. The companies demonstrate systemically poor performance on their commitment to model weight security, with an average score of 17%: 11 of the 16 companies receive 0% for this commitment. Our analysis highlights a clear structural shortcoming that future AI governance initiatives should correct: when companies make public commitments, they should proactively disclose how they meet their commitments to provide accountability, and these disclosures should be verifiable. To advance policymaking on corporate AI governance, we provide three directed recommendations that address underspecified commitments, the role of complex AI supply chains, and public transparency that could be applied towards AI governance initiatives worldwide.

Summary

This research evaluates how well major AI companies fulfill the voluntary commitments they made to the White House in 2023 by developing a detailed scoring rubric across 30 indicators. The study finds significant variation in compliance: OpenAI scores highest at 83%, while the average across 16 companies is only 53%. Most critically, companies show systemic failure on the model weight security commitment, with 11 of 16 companies scoring 0%. The authors argue that future AI governance frameworks must require proactive, verifiable public disclosure of how companies meet their commitments, and they offer three recommendations addressing underspecified commitments, supply chain complexity, and transparency gaps.
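The aggregation behind these percentages is simple averaging: per-indicator scores roll up into per-commitment scores, which roll up into a company's overall percentage. A minimal sketch of that arithmetic follows; the commitment names, indicator counts, and score values here are invented for illustration and do not reproduce the paper's actual 30-indicator rubric.

```python
# Hypothetical rubric aggregation: indicator scores in [0.0, 1.0] are
# grouped by commitment, averaged into a commitment score, and commitment
# scores are averaged into an overall percentage. All values are invented.
from statistics import mean

indicator_scores = {
    "model_weight_security": [0.0, 0.0, 0.5],
    "red_teaming": [1.0, 0.5, 1.0],
    "public_reporting": [0.5, 1.0],
}

commitment_scores = {c: mean(v) for c, v in indicator_scores.items()}
overall = mean(commitment_scores.values())

for commitment, score in commitment_scores.items():
    print(f"{commitment}: {score:.0%}")
print(f"overall: {overall:.0%}")
```

With this unweighted scheme every commitment counts equally toward the overall score regardless of how many indicators it contains, which is one plausible reading of "overall score" when the paper's exact weighting is not specified.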

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| Voluntary AI Safety Commitments | Policy | 91.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 1 KB
Conversion to HTML had a Fatal error and exited abruptly. This document may be truncated or damaged.

[View original on arXiv](https://arxiv.org/abs/2508.08345)