Longterm Wiki

METR (Model Evaluation and Threat Research), formerly known as ARC Evals, is an organization dedicated to evaluating frontier AI models for dangerous capabilities before deployment.

Facts

General
Website: https://metr.org

Related Wiki Pages

Top Related Pages

Safety Research

Anthropic Core Views

Approaches

Dangerous Capability Evaluations
Responsible Scaling Policies
Sandboxing / Containment

Analysis

AI Compute Scaling Metrics
AI Safety Intervention Effectiveness Matrix

Policy

Voluntary AI Safety Commitments
AI Safety Institutes (AISIs)

Organizations

Alignment Research Center

Other

AI Evaluations
Beth Barnes
Scalable Oversight

Risks

Reward Hacking
AI-Induced Enfeeblement

Concepts

AI Timelines
Existential Risk from AI
Situational Awareness
Self-Improvement and Recursive Enhancement

Key Debates

AI Safety Solution Cruxes
Technical AI Safety Research