Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems | FAR.AI
Credibility Rating
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: FAR AI
A high-profile 2024 paper by a broad coalition of AI safety and ML researchers proposing a formal verification framework as an alternative to behavioral or empirical-only safety approaches; widely cited in technical safety discussions.
Metadata
Summary
This paper introduces the 'Guaranteed Safe (GS) AI' framework, which aims to equip AI systems with high-assurance quantitative safety guarantees through three core components: a world model, a safety specification, and a verifier. The authors outline approaches for building each component, identify key technical challenges, and argue that this formal verification-based approach is necessary while critiquing alternative safety approaches.
Key Points
- Proposes three core components for guaranteed safe AI: a world model (a mathematical description of how the AI system affects the world), a safety specification (a mathematical description of which effects are acceptable), and a verifier (which produces an auditable proof certificate).
- Argues that existing AI safety approaches are inadequate for highly autonomous or safety-critical systems, motivating a formal alternative based on quantitative guarantees.
- The framework targets AI systems with high degrees of autonomy and general intelligence, where robust harm avoidance is especially critical.
- Authored by prominent researchers including Yoshua Bengio, Stuart Russell, Max Tegmark, and Joshua Tenenbaum, lending significant credibility.
- Outlines multiple candidate technical approaches and open challenges for constructing each of the three GS AI components.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| FAR AI | Organization | 76.0 |
Cached Content Preview
Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems
May 10, 2024
David A. Dalrymple (davidad)
Joar Max Viktor Skalse
Yoshua Bengio
Stuart Russell
Max Tegmark
Sanjit Seshia
Steve Omohundro
Christian Szegedy
Ben Goldhaber
Nora Ammann
Alessandro Abate
Joe Halpern
Clark Barrett
Ding Zhao
Tan Zhi-Xuan
Jeannette Wing
Joshua Tenenbaum
Abstract
Ensuring that AI systems reliably and robustly avoid harmful or dangerous behaviours is a crucial challenge, especially for AI systems with a high degree of autonomy and general intelligence, or systems used in safety-critical contexts. In this paper, we will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI. The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees. This is achieved by the interplay of three core components: a world model (which provides a mathematical description of how the AI system affects the outside world), a safety specification (which is a mathematical description of what effects are acceptable), and a verifier (which provides an auditable proof certificate that the AI satisfies the safety specification relative to the world model). We outline a number of approaches for creating each of these three core components, describe the main technical challenges, and suggest a number of potential solutions to them. We also argue for the necessity of this approach to AI safety, and for the inadequacy of the main alternative approaches.
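The interplay of the three components described in the abstract can be sketched in code. The following is a deliberately toy illustration, not an API from the paper: all names, types, and the finite-state enumeration are assumptions made here for clarity. A real GS AI verifier would emit a formal, machine-checkable proof rather than exhaustively checking states.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical, simplified sketch of the three GS AI components:
# a world model, a safety specification, and a verifier that relates them.

@dataclass
class WorldModel:
    """Mathematical description of how the AI system affects the world."""
    # Maps (state, action) to a predicted next state.
    transition: Callable[[str, str], str]

@dataclass
class SafetySpecification:
    """Mathematical description of which effects are acceptable."""
    is_acceptable: Callable[[str], bool]

@dataclass
class ProofCertificate:
    """Auditable evidence that the policy satisfies the specification."""
    holds: bool
    evidence: str

def verify(model: WorldModel,
           spec: SafetySpecification,
           policy: Callable[[str], str],
           states: List[str]) -> ProofCertificate:
    """Toy verifier: checks the policy against the spec relative to the
    world model over a finite state set, returning a certificate."""
    for s in states:
        next_state = model.transition(s, policy(s))
        if not spec.is_acceptable(next_state):
            return ProofCertificate(False, f"violation from state {s!r}")
    return ProofCertificate(True, f"checked {len(states)} states")
```

The key structural point the paper makes survives even in this caricature: the guarantee is always *relative to* the world model and the specification, so the certificate is only as trustworthy as those two inputs.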