Strengthening Resilience to AI Risk - CETaS
cetas.turing.ac.uk/publications/strengthening-resilience-...
Published by CETaS (Alan Turing Institute) and the Centre for Long-Term Resilience, this 2024 UK-focused policy briefing is relevant to AI governance practitioners and policymakers seeking structured frameworks for national-level AI risk management.
Metadata
Importance: 58/100 · Organizational report · Analysis
Summary
A briefing paper from CETaS and the Centre for Long-Term Resilience presenting a structured framework for the UK Government to understand and respond to AI risks across the full AI lifecycle. It proposes policy interventions organized around three goals: creating visibility and understanding, promoting best practices, and establishing incentives and enforcement. The paper argues the UK must act decisively to avoid both widespread AI harms and under-adoption due to fear.
Key Points
- Presents an AI lifecycle framework identifying risk pathways at the design/training, deployment, and long-term diffusion stages, with corresponding policy responses.
- Proposes three policy goals: creating visibility and understanding, promoting best practices, and establishing incentives and enforcement mechanisms.
- Warns of two failure modes: widespread AI-caused harms, or excessive fear leading to under-adoption and missed societal benefits.
- Argues the UK must demonstrate competence in AI governance to lead global multilateral mechanisms for transparency, standards, and accountability.
- Emphasizes that risks should be addressed as close to their source as possible in the AI development and deployment pipeline.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Centre for Long-Term Resilience | Organization | 63.0 |
Cached Content Preview
HTTP 200 · Fetched Mar 20, 2026 · 8 KB
## Abstract
This Briefing Paper from CETaS and the [Centre for Long-Term Resilience](https://www.longtermresilience.org/) aims to provide a clear framework to inform the UK Government’s approach to understanding and responding to the risks posed by Artificial Intelligence (AI). The Government has shown increasing ambition to take a globally leading role in AI safety, but currently the UK is inadequately resilient to the risks posed by AI. Now is the time to act decisively on the policy interventions required to address those risks.
Any further delay risks one of two undesirable outcomes: either a scenario where AI risks transition into widespread harms, directly impacting individuals and groups in society; or the converse scenario where widespread fear of AI risk results in a lack of adoption, meaning the UK does not realise the many societal benefits presented by these technologies. This paper addresses this challenge by presenting an evidence-based, structured framework for identifying AI risks and associated policy responses.
For the UK to foster a trustworthy AI ecosystem, policymakers must demonstrate both an understanding of and capacity to intervene across the AI lifecycle. This entails addressing risk pathways at their source in the design and training stages, mitigating deployment risks through implementation of clear safeguards, and redressing harmful impacts over the longer-term diffusion of AI systems across society.
We suggest a framework for understanding how risks can arise across different stages of the lifecycle of AI systems and their potential impacts. Policy recommendations will be clearly linked to these stages to ensure risks are targeted and redressed as close to their source as possible. We propose three main goals for policy interventions: creating visibility and understanding; promoting best practices; and establishing incentives and enforcement.
By demonstrating competence and commitment as well as ambition across these policy areas, the UK can establish its status as a leading voice in global discussions on AI risk and governance. Achieving this status will allow the UK to push for multilateral mechanisms which prioritise transparency and collective action, to coordinate global standards in high-risk areas of development and deployment, and to hold individual governments and private actors accountable for harmful applications of AI.
_This publication is licensed under the terms of the_ [_Creative Commons Attribution License 4.0_](https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode) _which permits unrestricted use, provided the original authors and source are credited._
## Watch the Panel Discussion
Strengthening Resilience to AI Risk - CETaS Annual Showcase 2024 - YouTube
... (truncated, 8 KB total)
Resource ID: 2aa76b16de64707e | Stable ID: OGExNDVhOT