Longterm Wiki

[2307.03718] Frontier AI Regulation: Managing Emerging Risks to Public Safety

paper

Authors

Markus Anderljung·Joslyn Barnhart·Anton Korinek·Jade Leung·Cullen O'Keefe·Jess Whittlestone·Shahar Avin·Miles Brundage·Justin Bullock·Duncan Cass-Beggs·Ben Chang·Tantum Collins·Tim Fist·Gillian Hadfield·Alan Hayes·Lewis Ho·Sara Hooker·Eric Horvitz·Noam Kolt·Jonas Schuett·Yonadav Shavit·Divya Siddarth·Robert Trager·Kevin Wolf

Credibility Rating

3/5
Good (3)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

An arXiv preprint (ID 2307.03718) arguing that frontier AI models pose a distinct regulatory challenge and proposing three building blocks for their regulation: standard-setting processes, registration and reporting requirements, and compliance mechanisms, along with an initial set of safety standards.

Paper Details

Citations
162
9 influential
Year
2023

Metadata

arXiv preprint · primary source

Abstract

Advanced AI models hold the promise of tremendous benefits for humanity, but society needs to proactively manage the accompanying risks. In this paper, we focus on what we term "frontier AI" models: highly capable foundation models that could possess dangerous capabilities sufficient to pose severe risks to public safety. Frontier AI models pose a distinct regulatory challenge: dangerous capabilities can arise unexpectedly; it is difficult to robustly prevent a deployed model from being misused; and, it is difficult to stop a model's capabilities from proliferating broadly. To address these challenges, at least three building blocks for the regulation of frontier models are needed: (1) standard-setting processes to identify appropriate requirements for frontier AI developers, (2) registration and reporting requirements to provide regulators with visibility into frontier AI development processes, and (3) mechanisms to ensure compliance with safety standards for the development and deployment of frontier AI models. Industry self-regulation is an important first step. However, wider societal discussions and government intervention will be needed to create standards and to ensure compliance with them. We consider several options to this end, including granting enforcement powers to supervisory authorities and licensure regimes for frontier AI models. Finally, we propose an initial set of safety standards. These include conducting pre-deployment risk assessments; external scrutiny of model behavior; using risk assessments to inform deployment decisions; and monitoring and responding to new information about model capabilities and uses post-deployment. We hope this discussion contributes to the broader conversation on how to balance public safety risks and innovation benefits from advances at the frontier of AI development.

Cited by 1 page

Page | Type | Quality
Structured Access / API-Only | Approach | 91.0

1 FactBase fact citing this source

Cached Content Preview

HTTP 200 · Fetched Feb 22, 2026 · 7 KB
 Computer Science > Computers and Society
arXiv:2307.03718 (cs)
[Submitted on 6 Jul 2023 (v1), last revised 7 Nov 2023 (this version, v4)]
Comments: Update July 11th: - Added missing footnote back in. - Adjusted author order (mistakenly non-alphabetical among the first 6 authors) and adjusted

... (truncated, 7 KB total)
Resource ID: kb-b475e8a9fafdcd6b | Stable ID: sid_9KqAimcGdw