Actionable Guidance for High-Consequence AI Risk Management: Towards Standards Addressing AI Catastrophic Risks

Barrett, A.M., Hendrycks, D., Newman, J., & Nonnecke, B.

paper

Authors

Anthony M. Barrett · Dan Hendrycks · Jessica Newman · Brandie Nonnecke

Credibility Rating

Good (3/5)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

This paper is directly relevant to AI governance practitioners and policymakers, as it was developed to inform NIST's AI RMF—a major U.S. standards framework—and represents one of the few efforts to operationalize catastrophic risk considerations within institutional risk management processes.

Paper Details

Citations: 21 (1 influential)
Year: 2022

Metadata

Importance: 68/100 · arXiv preprint · primary source

Abstract

Artificial intelligence (AI) systems can provide many beneficial capabilities but also risks of adverse events. Some AI systems could present risks of events with very high or catastrophic consequences at societal scale. The US National Institute of Standards and Technology (NIST) has been developing the NIST Artificial Intelligence Risk Management Framework (AI RMF) as voluntary guidance on AI risk assessment and management for AI developers and others. For addressing risks of events with catastrophic consequences, NIST indicated a need to translate from high level principles to actionable risk management guidance.

In this document, we provide detailed actionable-guidance recommendations focused on identifying and managing risks of events with very high or catastrophic consequences, intended as a risk management practices resource for NIST for AI RMF version 1.0 (released in January 2023), or for AI RMF users, or for other AI risk management guidance and standards as appropriate. We also provide our methodology for our recommendations.

We provide actionable-guidance recommendations for AI RMF 1.0 on: identifying risks from potential unintended uses and misuses of AI systems; including catastrophic-risk factors within the scope of risk assessments and impact assessments; identifying and mitigating human rights harms; and reporting information on AI risk factors including catastrophic-risk factors.

In addition, we provide recommendations on additional issues for a roadmap for later versions of the AI RMF or supplementary publications. These include: providing an AI RMF Profile with supplementary guidance for cutting-edge increasingly multi-purpose or general-purpose AI.

We aim for this work to be a concrete risk-management practices contribution, and to stimulate constructive dialogue on how to address catastrophic risks and associated issues in AI standards.

Summary

This paper translates high-level AI safety principles into concrete risk management practices to support NIST's AI Risk Management Framework (AI RMF). It provides recommendations for identifying unintended uses and misuses, incorporating catastrophic-risk factors into assessments, addressing human rights harms, and improving risk reporting. The work aims to establish practical standards for managing existential and societal-scale AI risks within governance frameworks.

Key Points

  • Translates abstract AI safety principles into actionable guidance compatible with NIST's AI RMF, bridging theory and institutional practice.
  • Provides specific recommendations for identifying unintended uses and misuses of AI systems during risk assessments.
  • Incorporates catastrophic and existential risk factors—often neglected—into standard risk management methodologies.
  • Addresses human rights harms as a component of AI risk, connecting safety concerns to broader ethical and legal frameworks.
  • Aims to stimulate dialogue on how catastrophic AI risks should be systematically addressed in national and international governance frameworks.

Cited by 1 page

Page          | Type   | Quality
Dan Hendrycks | Person | 19.0

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 6 KB
[2206.08966] Actionable Guidance for High-Consequence AI Risk Management: Towards Standards Addressing AI Catastrophic Risks
Computer Science > Computers and Society

arXiv:2206.08966 (cs)

[Submitted on 17 Jun 2022 (v1), last revised 23 Feb 2023 (this version, v3)]
 Title: Actionable Guidance for High-Consequence AI Risk Management: Towards Standards Addressing AI Catastrophic Risks

Authors: Anthony M. Barrett, Dan Hendrycks, Jessica Newman, Brandie Nonnecke

 
Comments: 56 pages; updated throughout for general consistency with NIST AI RMF 1.0
Subjects: Computers and Society (cs.CY); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
Cite as: arXiv:2206.08966 [cs.CY] (or arXiv:2206.08966v3 [cs.CY] for this version)

... (truncated, 6 KB total)
Resource ID: b88263a70cbf743e | Stable ID: MmE2NWVlZW