Longterm Wiki

NIST AI Risk Management Framework

government

Credibility Rating

Gold (5/5)

Gold standard. Rigorous peer review, high editorial standards, and strong institutional reputation.

Rating inherited from publication venue: NIST

The NIST AI RMF is a widely referenced U.S. government standard for AI risk governance, frequently cited in policy discussions and used by organizations building internal AI safety and compliance programs; relevant to AI safety researchers tracking institutional governance approaches.

Metadata

Importance: 72/100 · guidance document · reference

Summary

The NIST AI RMF is a voluntary, consensus-driven framework released in January 2023 to help organizations identify, assess, and manage risks associated with AI systems while promoting trustworthiness across design, development, deployment, and evaluation. It provides structured guidance organized around core functions and is accompanied by a Playbook, Roadmap, and a Generative AI Profile (2024) addressing risks specific to generative AI systems.

Key Points

  • Voluntary framework developed through public-private collaboration, released January 2023, with translations available in Arabic and Japanese.
  • Organizes AI risk management around trustworthiness properties including safety, security, explainability, fairness, privacy, and accountability.
  • Accompanied by supplementary resources: AI RMF Playbook, Roadmap, Crosswalk, and Trustworthy & Responsible AI Resource Center launched March 2023.
  • Generative AI Profile (NIST-AI-600-1, July 2024) extends the framework to address unique risks from generative AI systems.
  • Designed to align with and build upon existing AI risk efforts internationally, serving as a baseline for organizational AI governance.

Cited by 40 pages

| Page | Type | Quality |
|------|------|---------|
| Autonomous Coding | Capability | 63.0 |
| Large Language Models | Capability | 60.0 |
| Large Language Models | Concept | 62.0 |
| Long-Horizon Autonomous Tasks | Capability | 65.0 |
| Persuasion and Social Manipulation | Capability | 63.0 |
| AGI Development | -- | 52.0 |
| AI Capability Threshold Model | Analysis | 72.0 |
| AI Compounding Risks Analysis Model | Analysis | 60.0 |
| Corrigibility Failure Pathways | Analysis | 62.0 |
| Deceptive Alignment Decomposition Model | Analysis | 62.0 |
| Instrumental Convergence Framework | Analysis | 60.0 |
| Intervention Timing Windows | Analysis | 72.0 |
| AI Risk Activation Timeline Model | Analysis | 66.0 |
| AI Risk Cascade Pathways Model | Analysis | 67.0 |
| AI Risk Interaction Matrix | Analysis | 65.0 |
| AI Safety Research Allocation Model | Analysis | 65.0 |
| AI Safety Research Value Model | Analysis | 60.0 |
| Scheming Likelihood Assessment | Analysis | 61.0 |
| AI Risk Warning Signs Model | Analysis | 70.0 |
| Worldview-Intervention Mapping | Analysis | 62.0 |
| NIST and AI Safety | Organization | 63.0 |
| AI Control | Research Area | 75.0 |
| AI Alignment | Approach | 91.0 |
| Colorado Artificial Intelligence Act | Policy | 53.0 |
| AI Governance Coordination Technologies | Approach | 91.0 |
| Corporate AI Safety Responses | Approach | 68.0 |
| AI Evaluation | Approach | 72.0 |
| Failed and Stalled AI Proposals | Analysis | 63.0 |
| AI Governance and Policy | Crux | 66.0 |
| AI-Human Hybrid Systems | Approach | 91.0 |
| Third-Party Model Auditing | Approach | 64.0 |
| Red Teaming | Research Area | 65.0 |
| AI Standards Development | Concept | 69.0 |
| AI-Driven Concentration of Power | Risk | 65.0 |
| AI-Powered Fraud | Risk | 69.0 |
| AI-Driven Institutional Decision Capture | Risk | 73.0 |
| AI Proliferation | Risk | 60.0 |
| AI Development Racing Dynamics | Risk | 72.0 |
| Sycophancy | Risk | 65.0 |
| AI Winner-Take-All Dynamics | Risk | 54.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 4 KB
AI Risk Management Framework | NIST
https://www.nist.gov/itl/ai-risk-management-framework
Information Technology Laboratory

AI Risk Management Framework

Quick Links

  • Download the AI RMF 1.0
  • View the AI RMF Playbook
  • Visit the AI Resource Center

Overview of the AI RMF

 In collaboration with the private and public sectors, NIST has developed a framework to better manage risks to individuals, organizations, and society associated with artificial intelligence (AI). The NIST AI Risk Management Framework (AI RMF) is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems. 

Released on January 26, 2023, the Framework was developed through a consensus-driven, open, transparent, and collaborative process that included a Request for Information, several draft versions for public comments, multiple workshops, and other opportunities to provide input. It is intended to build on, align with, and support AI risk management efforts by others (Fact Sheet).

A companion NIST AI RMF Playbook has also been published by NIST, along with an AI RMF Roadmap, AI RMF Crosswalk, and various Perspectives.

On March 30, 2023, NIST launched the Trustworthy and Responsible AI Resource Center, which will facilitate implementation of, and international alignment with, the AI RMF. Examples of how other organizations are building on and using the AI RMF can be found via the AIRC's Use Case page.

On July 26, 2024, NIST released NIST-AI-600-1, Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile. The profile can help organizations identify the unique risks posed by generative AI and proposes actions for generative AI risk management that best align with their goals and priorities.

 To view public comments received on the previous drafts of the AI RMF and Requests for Information, see the AI RMF Development page.

 Translations 

 NIST AI Risk Management Framework (Arabic)
 NIST AI Risk Management Framework (Japanese)
 Prior Documents 

 Second draft of the AI Risk Management Framework (August 18, 2022) 
 Initial draft of the AI Risk Management Framework (March 17, 2022)
 Concept paper to help guide development of the AI Risk Management Framework (December 13, 2021)
 Brief summary of responses to the July 29, 2021, RFI (October 15,

... (truncated, 4 KB total)
Resource ID: 54dbc15413425997 | Stable ID: N2EzYzdjMD