Beyond Corporate Promises: How Government Can Follow Through on AI Preparedness
Credibility Rating
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: CSET Georgetown
A 2025 CSET policy brief by Kendrea Beers offering pragmatic near-term U.S. government actions to complement voluntary AI preparedness frameworks; relevant to debates on AI governance and public-private coordination on frontier AI risks.
Metadata
Summary
This CSET policy brief argues that while AI companies have adopted voluntary preparedness frameworks, the U.S. government should take three targeted near-term actions: sharing national security intelligence with AI companies to counter model theft, promoting transparency into frontier AI development, and facilitating industry best practices for risk management. These light-touch measures are framed as practical complements to private sector efforts while comprehensive AI legislation remains years away.
Key Points
- Frontier AI governance currently relies heavily on voluntary preparedness frameworks from companies like Anthropic, Google DeepMind, and OpenAI, which have real limitations without government support.
- Advanced AI model weights are vulnerable to theft by state actors like the CCP, and the U.S. government should share cybersecurity intelligence to help AI companies protect their systems.
- Government can promote transparency into frontier AI development through regulatory oversight without waiting for comprehensive legislation.
- Facilitating industry-wide best practices for risk management is a low-cost government action that can complement existing private sector commitments.
- The proposed measures are explicitly 'light-touch' and near-term, designed as a pragmatic bridge until more comprehensive AI governance frameworks are established.
Cached Content Preview
### CSET
# Beyond Corporate Promises: How Government Can Follow Through on AI Preparedness
Kendrea Beers
July 21, 2025
Frontier AI capabilities show no sign of slowing down so that governance can catch up, yet national security challenges need addressing in the near term. This blog post outlines a governance approach that complements existing commitments by AI companies, arguing that the government should take targeted actions toward AI preparedness: sharing national security expertise, promoting transparency into frontier AI development, and facilitating the development of best practices.
Frontier AI capabilities are advancing rapidly, posing increasingly pressing national security risks, and showing no sign of slowing down so that governance can catch up. While comprehensive regulation or legislation may be years away, targeted government actions can improve AI preparedness in the near term. This blog post outlines relatively light-touch, low-cost measures in key areas that could offer practical benefits with minimal downside risk.
Given the need for near-term preparedness measures, a pragmatic policy approach is to build on existing efforts by the private sector. So far, frontier AI governance in the private sector primarily takes the form of voluntary _preparedness frameworks,_ such as Anthropic’s [Responsible Scaling Policy](https://www.anthropic.com/news/anthropics-responsible-scaling-policy) and Google DeepMind’s [Frontier Safety Framework](https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/updating-the-frontier-safety-framework/Frontier%20Safety%20Framework%202.0.pdf). These frameworks, which draw from research by [civil](https://metr.org/blog/2023-09-26-rsp/) [society](https://cltc.berkeley.edu/publication/intolerable-ai-risk-thresholds/), describe how companies plan to manage risks from emerging AI capabilities. They are scoped to risks of severe harm; as a representative example, OpenAI’s [Preparedness Framework](https://cdn.openai.com/pdf/18a02b5d-6b67-4cec-ab64-68cdfbddebcd/preparedness-framework-v2.pdf) focuses on risks from cybersecurity, biological, chemical, and AI self-improvement capabilities. In Seoul in 2024, leading AI companies signed onto the [Frontier AI Safety Commitments](https://www.gov.uk/government/publications/frontier-ai-safety-commitments-ai-seoul-summit-2024/frontier-ai-safety-commitments-ai-seoul-summit-2024): they promised to manage risks effectively, hold themselves accountable for safely developing and deploying their systems, and be transparent to external actors, including governments.
These are challenging goals, but companies need not take on these challenges alone. The U.S. government can and should assist with national-security-relevant AI preparedness work that it is especially well suited for. The government should take action in three key areas:
1. Sharing national security expertise
... (truncated, 25 KB total)