Back
16 companies committed to publish frontier AI safety protocols
montrealethics.ai/ai-policy-corner-frontier-ai-safety-com...
This article covers a significant 2024 international AI governance milestone where leading AI companies made voluntary public commitments on frontier model safety protocols, relevant to tracking industry self-regulation efforts.
Metadata
Importance: 62/100 · blog post · news
Summary
At the AI Seoul Summit 2024, 16 major AI companies committed to publishing frontier AI safety protocols, building on the Bletchley Declaration. The commitments outline expectations for how companies should identify and manage catastrophic risks from frontier AI models, including requirements for red-teaming, evaluations, and risk thresholds.
Key Points
- 16 frontier AI companies signed safety commitments at the Seoul AI Summit in May 2024, pledging to publish model safety frameworks.
- Commitments include publishing safety policies before or alongside new frontier model releases, covering risk assessment and mitigation.
- Companies agreed to define thresholds at which identified risks would be deemed too severe to deploy or continue developing a model.
- The commitments build on the 2023 Bletchley Declaration and represent a step toward voluntary industry self-governance on AI safety.
- Signatory companies include major labs such as OpenAI, Google DeepMind, Anthropic, Meta, and Microsoft, among others.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI Lab Safety Culture | Approach | 62.0 |
Cached Content Preview
HTTP 200 · Fetched Mar 20, 2026 · 12 KB
✍️ By **Alexander Wilhelm**.
Alexander is a PhD Student in Political Science and a Graduate Affiliate at the [Governance and Responsible AI Lab (GRAIL)](https://cla.purdue.edu/academic/polsci/research/labs/grail/index.html), Purdue University.
* * *
**📌 Editor’s Note:** This article is part of our [AI Policy Corner](https://montrealethics.ai/category/insights/ai-policy-corner/) series, a collaboration between the **Montreal AI Ethics Institute (MAIEI)** and the [Governance and Responsible AI Lab (GRAIL)](https://cla.purdue.edu/academic/polsci/research/labs/grail/index.html) at **Purdue University.** The series provides concise insights into critical AI policy developments from the local to international levels, helping our readers stay informed about the evolving landscape of AI governance.
* * *
## Frontier AI Safety Commitments, AI Seoul Summit 2024
Discussions between governments, civil society, and companies on the ‘safe’ development of AI have advanced through collaborations such as the [AI Safety Summit 2023](https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023) held in the UK and the [AI Seoul Summit 2024](https://www.gov.uk/government/topical-events/ai-seoul-summit-2024). Led by the United Kingdom and the Republic of Korea, the Seoul Summit resulted in a framework of commitments, known as the [Frontier AI Safety Commitments](https://www.gov.uk/government/publications/frontier-ai-safety-commitments-ai-seoul-summit-2024/frontier-ai-safety-commitments-ai-seoul-summit-2024), which 20 organizations, including Anthropic, Microsoft, NVIDIA, and OpenAI, have agreed to. These commitments required signatories to publish “a safety framework focused on severe risks” at the AI Summit in France in February 2025 (see [The AI Ethics Brief #158](https://brief.montrealethics.ai/p/the-ai-ethics-brief-158-paris-ai) for more on the Paris AI Action Summit). However, rhetoric at the Paris Summit emphasized the [benefits of AI rather than its potential harms and risks,](https://www.nytimes.com/2025/02/10/technology/ai-summit-paris-technology.html) rais
... (truncated, 12 KB total)
Resource ID: c7bf226bdc483bf6 | Stable ID: MTg4NWJmOD