Longterm Wiki

Frontier Model Forum - Membership

web

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Frontier Model Forum

The Frontier Model Forum is a major industry-led AI safety governance body; this page documents its membership standards, relevant for understanding how leading AI labs are coordinating on safety commitments.

Metadata

Importance: 45/100 · homepage · reference

Summary

This page outlines the membership criteria and current members of the Frontier Model Forum (FMF), an industry non-profit focused on safe frontier AI development. Members must demonstrate frontier AI capability, a track record of safety practices, and commitment to contributing to FMF activities. Current members include Amazon, Anthropic, Google, Meta, Microsoft, and OpenAI.

Key Points

  • FMF members must prove frontier AI capability at scale, including expertise and resources for large-scale model development or deployment.
  • Members must publicly acknowledge AI safety risks and disclose guidelines for evaluating and mitigating them.
  • Safety review processes must be in place throughout the model lifecycle, with ability to delay or suspend deployments.
  • Members must support AI safety research, including third-party evaluations and societal impact studies.
  • Membership requires a minimum three-year financial and participatory commitment to FMF workshops and activities.

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| Frontier Model Forum | Organization | 58.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 2 KB

## Membership

The Frontier Model Forum is an industry non-profit dedicated to advancing the safe development and deployment of frontier AI systems.

The FMF draws on the technical and operational expertise of its member firms to benefit the broader AI safety ecosystem. We seek to admit members with a clear ability to develop and deploy frontier AI systems at scale, a proven commitment to AI safety and security, and a willingness to contribute to the mission of the FMF.

The current members of the FMF are Amazon, Anthropic, Google, Meta, Microsoft, and OpenAI.

## Member criteria

The FMF has three high-level criteria:

**Frontier AI capability**

- _Proven ability:_ Member firms must provide evidence that they have the expertise and resources to develop or deploy [frontier AI models](https://www.frontiermodelforum.org/about-us/) at scale.

**Track record of safety**

- _Documentation:_ Member firms must publicly acknowledge that frontier AI models pose both public safety and societal risks, and publicly disclose guidelines for evaluating and mitigating those risks.

- _Safety review processes:_ Member firms must provide evidence that they have processes in place for assessing the safety of frontier models throughout their lifecycle, and that those processes may lead to the delay or suspension of model deployment.

- _Research and evaluations:_ Member firms must provide evidence that they support AI safety research, including both third-party evaluations and independent studies on the societal impact of frontier AI.

**Clear intentions to contribute**

- **Commitment to the FMF:** Member firms must commit to participating in FMF workshops and activities, and to financially supporting the FMF’s safety efforts for a minimum of three years.

## Contact us

To inquire about FMF membership, please contact [membership@frontiermodelforum.org](mailto:membership@frontiermodelforum.org).
Resource ID: 41d13b1c0f84aee9 | Stable ID: OTdhNjBjYm