Longterm Wiki

AI - Centre for Long-Term Resilience

web

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Long-Term Resilience

CLTR is a UK policy-focused think tank relevant to those tracking government engagement with AI safety; useful for understanding the UK's policy landscape and civil society efforts to bridge technical AI safety and institutional governance.

Metadata

Importance: 45/100 · homepage

Summary

The Centre for Long-Term Resilience (CLTR) is a UK-based think tank focused on extreme risks, including AI safety and governance. Their AI program works to inform UK and international policy on safe and beneficial AI development, bridging technical research and policymaking.

Key Points

  • CLTR engages with UK government and international institutions to shape AI safety policy and governance frameworks
  • Focuses on translating technical AI safety research into actionable policy recommendations
  • Works on identifying and mitigating extreme risks from advanced AI systems
  • Aims to build institutional capacity and political will for responsible AI governance
  • Operates at the intersection of AI safety research, policy advocacy, and long-term risk reduction

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| Centre for Long-Term Resilience | Organization | 63.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 5 KB
Source: [https://www.longtermresilience.org/ai/](https://www.longtermresilience.org/ai/)

Topic/Area:

# Artificial Intelligence

Mitigating extreme risks from AI through sound policymaking

## Introduction

AI systems could pose a number of large-scale extreme risks to society. These include severe misuse, such as in bioweapon development or disinformation; societal harms, such as power concentration or threats to democracy; and key aspects of society being increasingly controlled by insufficiently trustworthy AI systems.

We work with the UK Government and wider Artificial Intelligence policy community to develop and implement best-practice governance recommendations to protect against these risks while enabling the benefits of AI.

![Jess Whittlestone speaking at UK Artificial Intelligence Policy summit](https://www.longtermresilience.org/wp-content/uploads/2024/09/AI-Summit-Jess-Whittlestone.jpg)

![Panel at Artificial Intelligence policy summit](https://www.longtermresilience.org/wp-content/uploads/2024/09/TBI-panel-Jess-Whittlestone-AI.jpg)

## Current focus areas

- Supporting the development of frontier AI regulation
- Research on open source and misuse risks
- Applying best-practice risk management and governance to AI companies
- Mitigating chronic and societal AI risks and building broader societal resilience
- UK Government coordination in response to AI risks and incidents

### Featured Work


### How the UK Government can govern the risk of loss of control


Feb 3, 2026

[Read More](https://www.longtermresilience.org/reports/how-the-uk-government-can-govern-the-risk-of-loss-of-control/ "How the UK Government can govern the risk of loss of control")

[View All Work](https://www.longtermresilience.org/reports/?_reports_filter=artificial-intelligence)

### [The Loss of Control Observatory: a prototype to detect real-world AI control incidents](https://www.longtermresilience.org/reports/the-loss-of-control-observatory-a-prototype-to-detect-real-world-ai-control-incidents/)


CLTR is developing a new methodology to systematically detect and analyse concerning autonomous behaviours, as part of a broader programme of work on …

Feb 2, 2026

### [Securing a seat at the table: pathways for advancing the UK’s global leadership in frontier AI governance](https://www.longtermresilience.org/reports/advancing-the-uks-global-leadership-in-frontier-ai-governance/)


How the UK can strengthen and differentiate its voice in the international AI conversation.

Dec 15, 2025

### [Preparing for AI security incidents](https://www.longtermresilience.org/reports/preparing-for-ai-security-incidents/)


Improving emergency preparedness with the UK AI bill and beyond

Sep 30, 2025

[Download](https://www.longtermresilience.org/wp-content/uploads/2025/02/Preparing-for-AI-security-incidents_-Improving-emergency-preparedness-with-the-UK-AI-bill-and-beyond.pdf)

## What we want to see


... (truncated, 5 KB total)
Resource ID: fd7d9319683a83fb | Stable ID: NmI4MDE2Yz