The Case for Targeted Regulation
Credibility Rating
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: Anthropic
Published by Anthropic in October 2024, this policy statement reflects a leading AI lab's public position on regulatory urgency, citing internal red-team findings and benchmark progress to support calls for targeted government intervention within an 18-month window.
Metadata
Summary
Anthropic argues that governments have an urgent 18-month window to implement narrowly-targeted AI regulation to mitigate catastrophic risks—particularly in cyber and CBRN domains—while preserving innovation. The post outlines principles for risk-focused policy, citing rapid capability advances as evidence that surgical regulation is needed soon.
Key Points
- AI capabilities in coding, reasoning, and scientific knowledge have advanced dramatically in 2023-2024, closing the window for proactive risk prevention.
- Current frontier models already show meaningful uplift for cyber offense tasks and CBRN-relevant scientific knowledge, with risks expected to grow.
- Governments should adopt targeted, narrowly-scoped regulation rather than broad restrictions, to avoid both under- and over-regulation.
- Anthropic's Responsible Scaling Policy is presented as a model for industry self-governance complementing government action.
- The post warns that delayed action risks poorly-designed, reactive regulation that harms innovation without effectively preventing catastrophic risks.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Safe and Secure Innovation for Frontier Artificial Intelligence Models Act | Policy | 66.0 |
Cached Content Preview
Policy
# The case for targeted regulation
Oct 31, 2024
Increasingly powerful AI systems have the potential to [accelerate scientific progress](https://darioamodei.com/machines-of-loving-grace), unlock new medical treatments, and grow the economy. But along with the remarkable new capabilities of these AIs come significant risks. **Governments should urgently take action on AI policy in the next eighteen months. The window for proactive risk prevention is closing fast.**
Judicious, narrowly-targeted regulation can allow us to get the best of both worlds: realizing the benefits of AI while mitigating the risks. Dragging our feet might lead to the worst of both worlds: poorly-designed, knee-jerk regulation that hampers progress while also failing to be effective at preventing risks.
In this post, we suggest some principles for how governments can meaningfully reduce catastrophic risks while supporting innovation in AI’s thriving scientific and commercial sectors.
## Urgency
In the last year, AI systems have grown dramatically better at math, graduate-level reasoning, and computer coding, along with many other capabilities. Inside AI companies, we see continued progress on as-yet undisclosed systems and results. These advances offer many positive applications. But progress in these same broad capabilities also brings with it the potential for destructive applications, either from the misuse of AI in domains such as cybersecurity or biology, or from the accidental or autonomous behavior of the AI system itself.
In the realm of cyber capabilities, models have rapidly advanced on a broad range of coding tasks and cyber offense evaluations. On the SWE-bench software engineering task, models have improved from being able to solve 1.96% of a test set of real-world coding problems ([Claude 2, October 2023](https://arxiv.org/abs/2310.06770)) to 13.5% ([Devin, March 2024](https://www.cognition.ai/blog/swe-bench-technical-report)) to 49% ([Claude 3.5 Sonnet, October 2024](https://www.anthropic.com/news/3-5-models-and-computer-use)). Internally, our Frontier Red Team has found that current models can already assist on a broad range of cyber offense-related tasks, and we expect that the next generation of models—which will be able to plan over long, multi-step tasks—will be even more effective.
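To put that progression in perspective, here is a minimal Python sketch (not part of the original post; the scores and month-level dates are taken directly from the paragraph above, with dates approximated to the first of each cited month) that computes the fold improvement between the published SWE-bench results:

```python
# Illustrative arithmetic only: fold improvement on SWE-bench,
# using the three data points cited in the post above.
from datetime import date

# (model, report date, fraction of SWE-bench problems solved)
scores = [
    ("Claude 2", date(2023, 10, 1), 0.0196),
    ("Devin", date(2024, 3, 1), 0.135),
    ("Claude 3.5 Sonnet", date(2024, 10, 1), 0.49),
]

first_name, first_date, first_score = scores[0]
for name, d, score in scores[1:]:
    # Elapsed time in whole months between the cited report dates
    months = (d.year - first_date.year) * 12 + (d.month - first_date.month)
    print(f"{name}: {score:.1%} solved, "
          f"{score / first_score:.0f}x {first_name} after {months} months")
```

Run as written, this reports roughly a 7x gain over Claude 2 within five months and a 25x gain within twelve, the pace the post points to when arguing that the window for proactive policy is closing.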
On the potential for AI exacerbating CBRN (chemical, biological, radiological, and nuclear) misuses, the UK AI Safety Institute tested a range of models from industry actors (including Anthropic) and [concluded](https://www.aisi.gov.uk/work/advanced-ai-evaluations-may-update) that:
> ...models can be used to obtain expert-level knowledge about biology and chemistry. For several models, replies to science questions were on par with those given by PhD-level experts.
AI systems have progressed dramatically in their understanding of the sciences in the last year. The widely used benchmark GPQA saw scores on its hardest section [grow from 38.8%](https://arxiv.org
... (truncated, 18 KB total)