METR: Responsible Scaling Policies
Web Credibility Rating
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: METR
METR (formerly ARC Evals) is a key organization in AI safety evaluations; this post originated much of the RSP discourse later adopted by Anthropic, Google DeepMind, and others, and is a foundational reference for understanding the RSP/ASL policy landscape.
Summary
METR introduces and advocates for Responsible Scaling Policies (RSPs), a framework requiring AI developers to specify capability thresholds at which it becomes unsafe to continue scaling or deployment without improved protective measures. The post outlines five key components of a good RSP: limits, protections, evaluation, response, and accountability. RSPs are positioned as a near-term voluntary safety commitment that can serve as a testbed for future evaluations-based regulatory regimes.
Key Points
- RSPs define when AI capabilities become too dangerous to scale further without improved safeguards, creating a structured gate on development.
- The framework has broad appeal: it supports halting development if dangerous capabilities emerge, and continuing if evaluations show risks haven't materialized.
- Five key RSP components: capability limits, protective measures, evaluation procedures, response plans, and accountability mechanisms (a hypothetical sketch of how these could fit together follows this list).
- METR views RSPs as a first step, not a substitute for regulation, designed to create norms and test processes that can inform future AI governance.
- Voluntary RSPs can be adopted quickly and help labs move from abstract safety principles to concrete, measurable commitments tied to evals.
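To make the component list concrete, here is a minimal, hypothetical Python sketch of how an RSP's components could be encoded as a gating check on scaling. Every class, field name, threshold, and value below is an illustrative assumption, not terminology or structure from METR's post.

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    CONTINUE = "continue scaling/deployment"
    HALT = "halt until protective measures improve"

@dataclass
class CapabilityEval:
    """One dangerous-capability evaluation: a capability limit plus the measured result."""
    name: str
    threshold: float  # capability level beyond which current protections are deemed inadequate
    measured: float   # score produced by the evaluation procedure

@dataclass
class ResponsibleScalingPolicy:
    """Toy model of the RSP components; field names are illustrative, not METR's."""
    limits: list[CapabilityEval]  # capability limits, paired with evaluation results
    protections: list[str]        # protective measures currently in place
    response_plan: str            # what happens when a limit is crossed
    accountability: str           # e.g. who verifies the evals were run faithfully

    def gate(self) -> Decision:
        """Gate further scaling on the evaluation results."""
        for ev in self.limits:
            if ev.measured >= ev.threshold:
                # A dangerous capability has emerged: trigger the response plan.
                return Decision.HALT
        return Decision.CONTINUE

# Illustrative usage: halt if any eval crosses its threshold, otherwise continue.
rsp = ResponsibleScalingPolicy(
    limits=[CapabilityEval("autonomous-replication", threshold=0.5, measured=0.2)],
    protections=["information security controls", "harmful-request refusals"],
    response_plan="pause scaling, upgrade protections, re-run evals",
    accountability="third-party audit of evaluation procedures",
)
print(rsp.gate())  # Decision.CONTINUE
```

The point of the sketch is only the shape of the commitment: evaluation results, not intentions, determine whether scaling continues.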
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Why Alignment Might Be Easy | Argument | 53.0 |
Cached Content Preview
We’ve been consulting with several parties[1](https://metr.org/blog/2023-09-26-rsp/#fn:1) on **responsible scaling[2](https://metr.org/blog/2023-09-26-rsp/#fn:2) policies (RSPs)**. An RSP specifies what level of AI capabilities an AI developer is prepared to handle safely with their current protective measures, and conditions under which it would be too dangerous to continue deploying AI systems and/or scaling up AI capabilities until protective measures improve.
We think that widespread adoption of high-quality RSPs (that include all the key components we detail below) would significantly reduce risks of an AI catastrophe relative to “business as usual”. However, we expect voluntary commitments will be insufficient to adequately contain risks from AI. RSPs were designed as a first step that safety-concerned labs could take themselves, rather than for policymakers; we don’t intend them as a substitute for regulation, now or in the future.[3](https://metr.org/blog/2023-09-26-rsp/#fn:3)
This page will explain the basic idea of RSPs as we see it, then discuss:
**Why we think RSPs are promising.** In brief (more below):
- **Broad appeal.** RSPs are intended to appeal both to (a) those who think AI could be extremely dangerous and seek things like [moratoriums on AI development](https://futureoflife.org/open-letter/pause-giant-ai-experiments/), and to (b) those who think it’s too early to worry. Under both views, RSPs are a promising intervention: by committing to gate scaling on concrete evaluations and empirical observations, we should expect (provided the evaluations are sufficiently accurate!) to _halt_ AI development in cases where we do see dangerous capabilities, and to _continue_ it in cases where worries about dangerous capabilities have not yet materialized.
- **Knowing which protective measures to prioritize.** RSPs can help AI developers move from broad caution-oriented principles to specific commitments, giving a framework for which protective measures (information security; refusing harmful requests; alignment research; etc.) they need to prioritize to continue development and deployment.
- **Evals-based rules and norms.** In the longer term, we’re excited about [evals](https://metr.org/)-based AI rules and norms more generally: requirements that AI systems be evaluated for dangerous capabilities, and restricted when that’s the only way to contain the risks (until protective measures improve). This could include standards, third-party audits, and regulation. Voluntary RSPs can happen quickly, and provide a testbed for processes and techniques that can be adopted to make future evals-based regulatory regimes work well.
... (truncated, 23 KB total)