Longterm Wiki

Towards Guaranteed Safe AI

paper

Authors

David "davidad" Dalrymple·Joar Skalse·Yoshua Bengio·Stuart Russell·Max Tegmark·Sanjit Seshia·Steve Omohundro·Christian Szegedy·Ben Goldhaber·Nora Ammann·Alessandro Abate·Joe Halpern·Clark Barrett·Ding Zhao·Tan Zhi-Xuan·Jeannette Wing·Joshua Tenenbaum

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

A foundational paper introducing guaranteed safe (GS) AI, a family of approaches that aim to produce AI systems with high-assurance quantitative safety guarantees, addressing the challenge of ensuring that AI systems reliably avoid harmful behaviors, especially in autonomous and safety-critical contexts.

Paper Details

Citations: 0 (8 influential)
Year: 2025

Metadata

arXiv preprint · primary source

Abstract

Ensuring that AI systems reliably and robustly avoid harmful or dangerous behaviours is a crucial challenge, especially for AI systems with a high degree of autonomy and general intelligence, or systems used in safety-critical contexts. In this paper, we will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI. The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees. This is achieved by the interplay of three core components: a world model (which provides a mathematical description of how the AI system affects the outside world), a safety specification (which is a mathematical description of what effects are acceptable), and a verifier (which provides an auditable proof certificate that the AI satisfies the safety specification relative to the world model). We outline a number of approaches for creating each of these three core components, describe the main technical challenges, and suggest a number of potential solutions to them. We also argue for the necessity of this approach to AI safety, and for the inadequacy of the main alternative approaches.

Summary

This paper introduces Guaranteed Safe (GS) AI, an approach to AI safety that aims to equip AI systems with high-assurance quantitative safety guarantees. The framework operates through three core components: a world model (mathematical description of how the AI system affects the world), a safety specification (mathematical description of acceptable effects), and a verifier (providing auditable proof that the AI satisfies the safety specification). The authors outline approaches for creating each component, discuss technical challenges, and argue for the necessity of this approach over alternative safety methods.
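To make the interplay of the three components concrete, here is a toy sketch in Python. It is my own simplification, not the paper's formalism: all names are hypothetical, the world model is finite-state, the safety specification is a simple state invariant, and the "verifier" just exhaustively checks every state reachable under the policy, issuing an auditable certificate only if the check succeeds.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical toy illustration of the GS AI components; names and the
# finite-state setting are a simplification, not taken from the paper.

State = int
Action = str

@dataclass
class WorldModel:
    """How the system evolves: successor states for a state-action pair."""
    successors: Callable[[State, Action], set[State]]

@dataclass
class SafetySpec:
    """Which states are acceptable (here: a simple state invariant)."""
    is_safe: Callable[[State], bool]

@dataclass
class ProofCertificate:
    """Auditable record: every state reachable under the policy was checked."""
    verified_states: frozenset[State]

def verify(policy: Callable[[State], Action],
           model: WorldModel,
           spec: SafetySpec,
           initial: State) -> Optional[ProofCertificate]:
    """Check that all states reachable under the policy satisfy the spec
    relative to the world model; return a certificate, or None on failure."""
    seen, frontier = set(), [initial]
    while frontier:
        s = frontier.pop()
        if s in seen:
            continue
        if not spec.is_safe(s):
            return None  # counterexample found: no guarantee can be issued
        seen.add(s)
        frontier.extend(model.successors(s, policy(s)) - seen)
    return ProofCertificate(frozenset(seen))

# Toy usage: a thermostat that must keep the temperature within [15, 30].
model = WorldModel(lambda t, a: {t + 1} if a == "heat" else {t - 1})
spec = SafetySpec(lambda t: 15 <= t <= 30)
policy = lambda t: "heat" if t < 22 else "cool"
print(verify(policy, model, spec, initial=20))
```

Real GS AI verifiers would rely on model checking, theorem proving, or certified probabilistic bounds rather than exhaustive enumeration, but the shape is the same: a guarantee is only issued relative to an explicit world model and specification, and it comes with an auditable artifact.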

Cited by 3 pages

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 98 KB
# Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems

David “davidad” Dalrymple
Joar Skalse
Yoshua Bengio
Stuart Russell
Max Tegmark
Sanjit Seshia
Steve Omohundro
Christian Szegedy
Ben Goldhaber
Nora Ammann
Alessandro Abate
Joe Halpern
Clark Barrett
Ding Zhao
Tan Zhi-Xuan
Jeannette Wing
Joshua Tenenbaum

###### Abstract

Ensuring that AI systems reliably and robustly avoid harmful or dangerous behaviours is a crucial challenge, especially for AI systems with a high degree of autonomy and general intelligence, or systems used in safety-critical contexts. In this paper, we will introduce and define a family of approaches to AI safety, which we will refer to as _guaranteed safe_ (GS) AI. The core feature of these approaches is that they aim to produce AI systems which are equipped with _high-assurance quantitative safety guarantees_. This is achieved by the interplay of three core components: a _world model_ (which provides a mathematical description of how the AI system affects the outside world), a _safety specification_ (which is a mathematical description of what effects are acceptable), and a _verifier_ (which provides an auditable proof certificate that the AI satisfies the safety specification relative to the world model). We outline a number of approaches for creating each of these three core components, describe the main technical challenges, and suggest a number of potential solutions to them. We also argue for the necessity of this approach to AI safety, and for the inadequacy of the main alternative approaches.


## 1 Introduction

We introduce and define a family of approaches to AI safety, collectively referred to as _guaranteed safe_ (GS) AI. These approaches aim to provide high-assurance quantitative guarantees about the safety of an AI system’s behaviour through the use of three core components: a formal _safety specification_, a _world model_, and a _verifier_. We will argue that this strategy is both promising and underexplored, and contrast it with other ongoing efforts in AI safety. We will also outline several ongoing avenues of research within the broader GS research agenda, identify some of their core difficulties, and discuss approaches for overcoming these difficulties. Central examples of agendas which fall under the GS AI family include Szegedy ( [2020](https://ar5iv.labs.arxiv.org/html/2405.06624#bib.bib171 "")); Wing ( [2021](https://ar5iv.labs.arxiv.org/html/2405.06624#bib.bib186 "")); Seshia et al. ( [2022](https://ar5iv.labs.arxiv.org/html/2405.06624#bib.bib158 "")); Russell ( [2022](https://ar5iv.labs.arxiv.org/html/2405.06624#bib.bib148 "")); Tegmark & Omohundro ( [2023](https://ar5iv.labs.arxiv.org/html/2405.06624#bib.bib175 "")); ’davidad’ Dalrymple ( [2024](https://ar5iv.labs.arxiv.org/html/2405.06624#bib.bib45 "")); Bengio ( [2024](https://ar5iv.labs.arxiv.org/html/2405.06624#bib.bib21 "")).
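One way to read "high-assurance quantitative guarantee" is as a verified lower bound on the probability that the system's behaviour, as predicted by the world model, satisfies the specification. The notation below is an illustrative paraphrase, not a formula quoted from the paper:

```latex
% Illustrative formalization (assumed notation, not taken verbatim from the paper):
% M = world model, \pi = AI system / policy, \varphi = safety specification,
% \tau = a trajectory of the system as predicted by M, \epsilon = risk tolerance.
\Pr_{\tau \sim M(\pi)}\left[\, \tau \models \varphi \,\right] \;\ge\; 1 - \epsilon
```

The verifier's role is to produce an auditable certificate of such a bound, so the guarantee is only as strong as the world model and specification it is stated against.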

Critical infrastructure and safety-critical systems are requ

... (truncated, 98 KB total)