Longterm Wiki

METR's Analysis of Frontier AI Safety Cases (FAISC)


Credibility Rating

4/5 (High)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: METR

METR (formerly ARC Evals) is a leading organization in frontier AI evaluation; this page likely presents their analytical framework or case studies used to inform safety assessments and deployment decisions for advanced AI systems.

Metadata

Importance: 62/100 · organizational report · analysis

Summary

METR (Model Evaluation and Threat Research) provides analysis related to frontier AI safety cases, likely examining evaluation frameworks and safety benchmarks for advanced AI systems. The resource appears to document METR's methodological approach to assessing dangerous capabilities and safety properties of frontier models, and it indexes the frontier safety policies published by major AI developers.

Key Points

  • METR focuses on evaluating frontier AI models for dangerous or concerning capabilities
  • Provides structured analysis framework for assessing AI safety-relevant behaviors
  • Likely covers autonomous capabilities, task completion, and potential for misuse
  • Supports informed decision-making for AI developers and policymakers on deployment thresholds
  • Part of METR's broader work on standardized AI safety evaluations

Cited by 4 pages

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 6 KB

### Published company policies

Our indexing of these documents should not be considered an endorsement of their substance. Our hope is that making what has been published accessible enables more dialogue, comparison, and critique.

**Anthropic**

- [Responsible Scaling Policy, v3.0 (Feb 24, 2026)](https://anthropic.com/responsible-scaling-policy/rsp-v3-0)
- [Responsible Scaling Policy, v2.2 (May 2025)](https://www-cdn.anthropic.com/872c653b2d0501d6ab44cf87f43e1dc4853e4d37.pdf)
- [Responsible Scaling Policy, v2.1 (Mar 2025)](https://www-cdn.anthropic.com/17310f6d70ae5627f55313ed067afc1a762a4068.pdf)
- [Responsible Scaling Policy, v2.0 (Oct 2024)](https://assets.anthropic.com/m/24a47b00f10301cd/original/Anthropic-Responsible-Scaling-Policy-2024-10-15.pdf)
- [Responsible Scaling Policy, v1.0 (Sep 2023)](https://www-cdn.anthropic.com/1adf000c8f675958c2ee23805d91aaade1cd4613/responsible-scaling-policy.pdf)
- [Frontier Compliance Framework (Mar 2026)](https://trust.anthropic.com/doc/trust?rid=69a61159be46d1990abe8f3a&r=iz673w96495gyjer8h78n)

**OpenAI**

- [Preparedness Framework, v2.0 (Apr 15, 2025)](https://cdn.openai.com/pdf/18a02b5d-6b67-4cec-ab64-68cdfbddebcd/preparedness-framework-v2.pdf)
- [Preparedness Framework, Beta (Dec 2023)](https://cdn.openai.com/openai-preparedness-framework-beta.pdf)

**Google DeepMind**

- [Frontier Safety Framework, v3.0 (Sep 22, 2025)](https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/strengthening-our-frontier-safety-framework/frontier-safety-framework_3.pdf)
- [Frontier Safety Framework, v2.0 (Feb 2025)](https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/updating-the-frontier-safety-framework/Frontier%20Safety%20Framework%202.0.pdf)
- [Frontier Safety Framework, v1.0 (May 2024)](https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/introducing-the-frontier-safety-framework/fsf-technical-report.pdf)

**Meta**

- [Frontier AI Framework (Feb 3, 2025)](https://ai.meta.com/static-resource/meta-frontier-ai-framework/)

**Microsoft**

- Frontier Governance Framework (Feb 8, 2025): https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/microsoft/final/en-us/microsoft-brand/do

... (truncated, 6 KB total)
Resource ID: 7e3b7146e1266c71 | Stable ID: YzdmMmVhOG