Longterm Wiki

Early Best Practices for Frontier AI Safety Evaluations

web

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Frontier Model Forum

The Frontier Model Forum is an industry consortium of leading AI labs; this page indexes their official publications on safety evaluation best practices, biosafety, cyber risk, and AI governance, making it a key reference for tracking industry-led safety norms.

Metadata

Importance: 62/100 · organizational report · homepage

Summary

This is the publications index of the Frontier Model Forum (FMF), an industry body comprising leading AI labs, compiling their issue briefs, technical reports, research updates, and public comments on frontier AI safety. Topics span safety evaluations, biosafety thresholds, red teaming, cyber risks, compute measurement, and AI governance frameworks. It serves as a central hub for FMF's evolving best practices and policy contributions.

Key Points

  • Hosts a growing library of issue briefs on frontier AI safety topics including biosafety thresholds, red teaming, chain-of-thought monitorability, and adversarial distillation.
  • Technical reports cover capability assessments, frontier mitigations, third-party assessments, and cyber risk management in AI frameworks.
  • Includes public comments submitted to US government bodies (NIST, CAISI, AISI) on AI safety standards and policy.
  • Represents collaborative output from major frontier AI developers (Anthropic, Google, Microsoft, OpenAI) on shared safety norms.
  • Publications date from 2023 onward, reflecting the rapid evolution of industry-led AI safety evaluation practices.

Cited by 1 page

| Page | Type | Quality |
|---|---|---|
| AI Lab Safety Culture | Approach | 62.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 6 KB

## Publications

#### Issue Briefs

[Adversarial Distillation](https://www.frontiermodelforum.org/issue-briefs/issue-brief-adversarial-distillation/). February 23, 2026.

[Chain of Thought Monitorability](https://www.frontiermodelforum.org/issue-briefs/chain-of-thought-monitorability/). January 27, 2026.

[Preliminary Taxonomy of AI-Bio Misuse Mitigations](https://www.frontiermodelforum.org/issue-briefs/preliminary-taxonomy-of-ai-bio-misuse-mitigations/). July 30, 2025.

[Frontier AI Biosafety Thresholds](https://www.frontiermodelforum.org/issue-briefs/frontier-ai-biosafety-thresholds/). May 12, 2025.

[Preliminary Reporting Tiers for AI-Bio Safety Evaluations](https://www.frontiermodelforum.org/updates/issue-brief-preliminary-reporting-tiers-for-ai-bio-safety-evaluations/). March 18, 2025.

[Thresholds for Frontier AI Safety Frameworks](https://www.frontiermodelforum.org/updates/issue-brief-thresholds-for-frontier-ai-safety-frameworks/). February 7, 2025.

[Preliminary Taxonomy of AI-Bio Safety Evaluations](https://www.frontiermodelforum.org/updates/issue-brief-preliminary-taxonomy-of-ai-bio-safety-evaluations/). December 20, 2024.

[Preliminary Taxonomy of Pre-Deployment Frontier AI Safety Evaluations](https://www.frontiermodelforum.org/updates/issue-brief-preliminary-taxonomy-of-pre-deployment-frontier-ai-safety-evaluations/). December 20, 2024.

[AI for Cyber Defense](https://www.frontiermodelforum.org/updates/issue-brief-ai-for-cyber-defense/). November 22, 2024.

[Components of Safety Frameworks](https://www.frontiermodelforum.org/updates/issue-brief-components-of-frontier-ai-safety-frameworks/). November 8, 2024.

[Early Best Practices for Frontier AI Safety Evaluations](https://www.frontiermodelforum.org/updates/early-best-practices-for-frontier-ai-safety-evaluations/). July 31, 2024.

[Foundational Security Practices](https://www.frontiermodelforum.org/updates/issue-brief-foundational-security-practices/). July 31, 2024.

[Measuring Training Compute](https://www.frontiermodelforum.org/updates/issue-brief-measuring-training-compute/). May 2, 2024.

[What is Red Teaming?](https://www.frontiermodelforum.org/uploads/2023/10/FMF-AI-Red-Teaming.pdf). October 24, 2023.

#### Technical Reports

[Managing Advanced Cyber Risks in Frontier AI Frameworks](https://www.frontiermodelforum.org/technical-reports/managing-advanced-cyber-risks-in-frontier-ai-frameworks/). February 13, 2026.

[Third-Party Assessments](https://www.frontiermodelforum.org/technical-reports/third-party-assessments/). August 4, 2025.

[Frontier Mitigations](https://www.frontiermodelforum.org/technical-reports/frontier-mitigations/). June 30, 2025.

[Risk Taxonomy and Thresholds for Frontier AI Frameworks](https://www.frontiermodelforum.org/technical-reports/risk-taxonomy-and-thresholds/). June 18, 2025.

[Frontier Capability Assessments](https://www.frontiermodelforum.org/technical-reports

... (truncated, 6 KB total)
Resource ID: 5329d38ad33971ff | Stable ID: NTJmZDBiNG