Longterm Wiki

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Frontier Model Forum

The Frontier Model Forum is an industry body founded by Anthropic, Google, Microsoft, and OpenAI to coordinate on AI safety; this update reflects industry-led self-governance efforts as of mid-2024.

Metadata

Importance: 52/100 · Tags: organizational report, news

Summary

The Frontier Model Forum (FMF), an industry consortium of leading AI labs, provides a 2024 progress update on its AI safety initiatives, including workstreams addressing biosecurity, cybersecurity, model security, safety evaluations, and an AI Safety Fund. The update details early best practices development, expert workshops, and participation in international AI safety governance events like the AI Seoul Summit.

Key Points

  • FMF launched domain-specific workstreams on Frontier AI & Bio, Frontier AI & Cyber, and Frontier AI Security to develop threat models, evaluations, and mitigations.
  • A general Safety Evaluations workstream published early best practices briefs informed by expert workshops and interviews.
  • The AI Safety Fund completed an initial funding round, directing resources toward frontier AI safety research.
  • FMF participated in the AI Seoul Summit, signaling engagement with international AI governance coordination.
  • The forum convenes internal standing groups and external expert workshops to build shared industry standards across member organizations.

Cited by 3 pages

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 9 KB

## Progress Update: Advancing Frontier AI Safety in 2024 and Beyond

By:

Frontier Model Forum

Posted on:

29th August 2024

The core mission of the Frontier Model Forum is to advance frontier AI safety. By identifying best practices, supporting scientific research, and facilitating greater information-sharing about frontier AI safety, we aim to meaningfully improve the safe development and deployment of the most advanced general-purpose AI systems.

We’re excited to share more about our early progress and plans in each of those areas. From participating in the AI Seoul Summit to establishing several new AI safety workstreams–as well as completing an initial round of the AI Safety Fund–we have leaned into our mission while also laying a strong foundation for the work ahead. We look forward to building on that foundation in the months and years to come.

**Initial Workstreams**

AI safety faces a wide range of open questions. Some are unique to specific threat domains: for instance, how do you measure a system’s understanding of synthetic biology protocols, or its ability to detect cyber vulnerabilities? By contrast, other urgent questions are common to AI safety evaluations and frameworks in general: for example, what should a coherent safety framework entail?

The workstreams of the Frontier Model Forum aim to develop both domain-specific and general best practices. Given their importance to public safety, we have launched three workstreams focused on the specific threat domains below and aim to publish regular updates as appropriate.

- **Frontier AI and Bio**. Although general-purpose AI systems hold significant promise for public health breakthroughs, they may also amplify or introduce biological threats to public safety. To develop best practices and shared understandings around threat models, evaluations, and mitigation strategies for biology, we have established an internal standing group of biosafety experts and also begun convening workshops with relevant biosecurity experts from research labs across academia, industry and government.

- **Frontier AI and Cyber**. As frontier AI systems become increasingly adept at understanding and writing code, the cybersecurity implications of their coding capabilities will be increasingly important. Although they may introduce novel or more efficient ways of defending against cyber attacks, frontier AI systems may also amplify existing threats and introduce new ones. As with our work on biology, we have begun to convene an internal standing group of cyber threat modeling and evaluation leads to establish best practices and shared understandings around threat models, evaluations, and mitigation strategies for the advanced cyber threats that emerge from AI capabilities.

- **Frontier AI Security**. As we noted in a recent [issue brief](https://www.front

... (truncated, 9 KB total)
Resource ID: 51e8802a5aef29f6 | Stable ID: OGQ0YWE3NW