Longterm Wiki

Centre for Long-Term Resilience: AI Safety Frameworks Risk Governance


Credibility Rating

3/5
Good (3)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Long-Term Resilience

Published by the Centre for Long-Term Resilience (CLTR), a UK-based think tank focused on catastrophic risk; this report is relevant to policymakers and researchers working on AI governance frameworks and institutional risk management.

Metadata

Importance: 62/100 · organizational report · analysis

Summary

This report from the Centre for Long-Term Resilience analyzes AI safety frameworks and risk governance structures, examining how governments and organizations can effectively manage the risks posed by advanced AI systems. It likely provides policy recommendations and evaluates existing governance mechanisms for ensuring safe AI development and deployment.

Key Points

  • Examines existing AI safety frameworks from major AI developers and how they address risk governance challenges
  • Analyzes the role of government and regulatory bodies in overseeing AI development and deployment
  • Provides recommendations for strengthening risk governance structures to address both near-term and long-term AI risks
  • Considers how safety frameworks can be made more robust, accountable, and enforceable across jurisdictions
  • Bridges technical AI safety concerns with practical policy and governance implementation strategies

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 57 KB
# Why frontier AI safety frameworks need to include risk governance

Ben Robinson, Malcolm Murray, James Ginns, Marta Krzeminska February 2025

[www.longtermresilience.org](http://www.longtermresilience.org/)


The Centre for Long-Term Resilience \| February 2025

Contents

Executive Summary 2

1. Background 3

2. Risk governance components 7

3. Analysis of current frontier safety frameworks 9

4. Recommendations 11

Appendices 14

# Executive Summary

Frontier safety frameworks have emerged as an important upstream tool for AI companies to manage extreme risks from AI. These frameworks aim to set thresholds for powerful AI models and to specify mitigations if these thresholds are reached (e.g. re-training or delaying deployment).

Currently, few of these frameworks include components of effective risk governance, understood as the structure, policies and processes for holistically managing risk across an organisation. This reduces the overall effectiveness of these frameworks, potentially creating silos, blurred accountabilities and unclear decision-making processes when risk thresholds are being approached or reached, thereby increasing the chance of harmful models being released.

This report outlines why risk governance should be enhanced in future versions of safety frameworks, along with recommendations for AI companies and Governments in the near and longer term. Key contributions include:

  • Analysis of safety frameworks with and without risk governance (Table 1, p. 5)
  • Overview of risk governance components (Table 2, p. 6) and whether currently published frameworks have evidence of these components (p. 9 and Appendix 1)
  • Recommendations for AI companies and Governments (p. 11 and below), drawing from best practice in other industries (Appendix 2)

# Recommendations for AI Companies

1. Implement a ‘Minimum Viable Product’ (MVP) of best practice risk governance. This should include: a) risk ownership, b) a sub-committee of the Board, c) challenging and advising first line, d) external audit, e) a central risk function, f) executive risk officer, and g) risk tracking and monitoring.
2. Self-assess current safety framework and risk management practices against the risk governance components provided (Table 2, p. 6), identifying gaps and areas for improvement. Use the recommendation matrix (Figure 2, p. 11) to complete the risk governance framework over time.

# Recommendations for Governments

1. Encourage and incentivise AI companies to commit to implementing risk governance at the “MVP” level, for example by requesting that companies share their risk governance processes publicly and including risk governance in evaluations of safety frameworks.
2. Require AI companies to implement two key components that are essential for external clarity on the efficacy of the risk management process and transparency in the risks that a

... (truncated, 57 KB total)
Resource ID: f5a4c47e88f6fd09 | Stable ID: MGQyMDAzZD