Longterm Wiki

Anthropic Research Team, "Constitutional AI: Harmlessness from AI Feedback," arXiv, December 2022

paper

Authors

Yuntao Bai·Saurav Kadavath·Sandipan Kundu·Amanda Askell·Jackson Kernion·Andy Jones·Anna Chen·Anna Goldie·Azalia Mirhoseini·Cameron McKinnon·Carol Chen·Catherine Olsson·Christopher Olah·Danny Hernandez·Dawn Drain·Deep Ganguli·Dustin Li·Eli Tran-Johnson·Ethan Perez·Jamie Kerr·Jared Mueller·Jeffrey Ladish·Joshua Landau·Kamal Ndousse·Kamile Lukosuite·Liane Lovitt·Michael Sellitto·Nelson Elhage·Nicholas Schiefer·Noemi Mercado·Nova DasSarma·Robert Lasenby·Robin Larson·Sam Ringer·Scott Johnston·Shauna Kravec·Sheer El Showk·Stanislav Fort·Tamera Lanham·Timothy Telleen-Lawton·Tom Conerly·Tom Henighan·Tristan Hume·Samuel R. Bowman·Zac Hatfield-Dodds·Ben Mann·Dario Amodei·Nicholas Joseph·Sam McCandlish·Tom Brown·Jared Kaplan

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

Foundational paper introducing Constitutional AI, a novel training method that uses AI-generated feedback and principles-based alignment to reduce harmful outputs without extensive human labeling, addressing key AI safety challenges in scalable oversight.

Paper Details

Citations
2,658
211 influential
Year
2022

Metadata

arXiv preprint · primary source

Abstract

As AI systems become more capable, we would like to enlist their help to supervise other AIs. We experiment with methods for training a harmless AI assistant through self-improvement, without any human labels identifying harmful outputs. The only human oversight is provided through a list of rules or principles, and so we refer to the method as 'Constitutional AI'. The process involves both a supervised learning and a reinforcement learning phase. In the supervised phase we sample from an initial model, then generate self-critiques and revisions, and then finetune the original model on revised responses. In the RL phase, we sample from the finetuned model, use a model to evaluate which of the two samples is better, and then train a preference model from this dataset of AI preferences. We then train with RL using the preference model as the reward signal, i.e. we use 'RL from AI Feedback' (RLAIF). As a result we are able to train a harmless but non-evasive AI assistant that engages with harmful queries by explaining its objections to them. Both the SL and RL methods can leverage chain-of-thought style reasoning to improve the human-judged performance and transparency of AI decision making. These methods make it possible to control AI behavior more precisely and with far fewer human labels.
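To make the two-phase recipe in the abstract concrete, the following is a minimal sketch of the supervised critique-and-revision loop. The `generate` placeholder, the principle wording, and the function names are illustrative assumptions, not the paper's actual prompts or implementation.

```python
# Minimal sketch of the supervised (SL-CAI) critique-and-revision loop from
# the abstract. `generate` stands in for any language-model sampling call;
# the principle wording and function names are illustrative assumptions,
# not Anthropic's actual prompts or code.

def generate(prompt: str) -> str:
    """Placeholder for a call to a language model (API or local)."""
    raise NotImplementedError

# One critique/revision principle, paraphrased; the paper samples from a larger set.
CRITIQUE_REQUEST = (
    "Identify specific ways in which the assistant's last response is "
    "harmful, unethical, or otherwise objectionable."
)
REVISION_REQUEST = (
    "Rewrite the assistant's response to remove any harmful or unethical content."
)

def critique_and_revise(user_prompt: str, n_rounds: int = 1) -> str:
    """Sample an initial response, then repeatedly critique and revise it.

    The (prompt, final revision) pairs form the finetuning data for the
    supervised phase of Constitutional AI.
    """
    response = generate(f"Human: {user_prompt}\n\nAssistant:")
    for _ in range(n_rounds):
        critique = generate(
            f"Human: {user_prompt}\n\nAssistant: {response}\n\n"
            f"Critique Request: {CRITIQUE_REQUEST}\n\nCritique:"
        )
        response = generate(
            f"Human: {user_prompt}\n\nAssistant: {response}\n\n"
            f"Critique: {critique}\n\n"
            f"Revision Request: {REVISION_REQUEST}\n\nRevision:"
        )
    return response
```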

Summary

Constitutional AI (CAI) is a method for training a harmless AI assistant through self-improvement, without human labels identifying harmful outputs. The approach uses a constitution, a short set of written principles, to guide behavior, and proceeds in two phases: a supervised learning phase in which the model critiques and revises its own responses, and a reinforcement learning phase that uses AI feedback (RLAIF) to train a preference model serving as the reward signal. The resulting RL-CAI assistant is non-evasive and engages with harmful queries by explaining its objections; it is at least as harmless as models trained with human feedback while requiring far fewer human labels and making AI behavior more transparent and controllable.
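As a companion to the summary, here is a hedged sketch of the RL-phase data collection (RLAIF): a feedback model compares two responses from the finetuned model against a sampled constitutional principle, and the resulting preference pairs train the preference model used as the RL reward signal. The helper `choose_option`, the principle texts, and the record format are assumptions for illustration, not the paper's code.

```python
# Hedged sketch of the RL-phase (RLAIF) preference-labeling step: a feedback
# model picks which of two responses better satisfies a sampled constitutional
# principle, and the resulting pairs train the preference model used as the
# RL reward signal. `choose_option`, the principle texts, and the record
# format are illustrative assumptions.
import random

def choose_option(user_prompt: str, option_a: str, option_b: str, principle: str) -> int:
    """Placeholder: ask a feedback model which response better follows the
    principle; return 0 for option A, 1 for option B."""
    raise NotImplementedError

PRINCIPLES = [
    "Choose the response that is least harmful and most ethical.",
    "Choose the response that is most helpful, honest, and harmless.",
]

def label_pair(user_prompt: str, response_a: str, response_b: str) -> dict:
    """Build one AI-preference record for preference-model training."""
    principle = random.choice(PRINCIPLES)  # a principle is sampled per comparison
    winner = choose_option(user_prompt, response_a, response_b, principle)
    return {
        "prompt": user_prompt,
        "principle": principle,
        "chosen": (response_a, response_b)[winner],
        "rejected": (response_a, response_b)[1 - winner],
    }
```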

Cited by 2 pages

Page | Type | Quality
Daniela Amodei | Person | 21.0
AI Output Filtering | Approach | 63.0

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 98 KB
[2212.08073] Constitutional AI: Harmlessness from AI Feedback

Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Carol Chen, Catherine Olsson, Christopher Olah, Danny Hernandez, Dawn Drain, Deep Ganguli, Dustin Li, Eli Tran-Johnson, Ethan Perez, Jamie Kerr, Jared Mueller, Jeffrey Ladish, Joshua Landau, Kamal Ndousse, Kamile Lukosuite, Liane Lovitt, Michael Sellitto, Nelson Elhage, Nicholas Schiefer, Noemi Mercado, Nova DasSarma, Robert Lasenby, Robin Larson, Sam Ringer, Scott Johnston, Shauna Kravec, Sheer El Showk, Stanislav Fort, Tamera Lanham, Timothy Telleen-Lawton, Tom Conerly, Tom Henighan, Tristan Hume, Samuel R. Bowman, Zac Hatfield-Dodds, Ben Mann, Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom Brown, Jared Kaplan

Anthropic
Correspondence to: {yuntao,jared}@anthropic.com
Author contributions are detailed in Section 7.

Abstract

As AI systems become more capable, we would like to enlist their help to supervise other AIs. We experiment with methods for training a harmless AI assistant through self-improvement, without any human labels identifying harmful outputs. The only human oversight is provided through a list of rules or principles, and so we refer to the method as 'Constitutional AI'. The process involves both a supervised learning and a reinforcement learning phase. In the supervised phase we sample from an initial model, then generate self-critiques and revisions, and then finetune the original model on revised responses. In the RL phase, we sample from the finetuned model, use a model to evaluate which of the two samples is better, and then train a preference model from this dataset of AI preferences. We then train with RL using the preference model as the reward signal, i.e. we use 'RL from AI Feedback' (RLAIF). As a result we are able to train a harmless but non-evasive AI assistant that engages with harmful queries by explaining its objections to them. Both the SL and RL methods can leverage chain-of-thought style reasoning to improve the human-judged performance and transparency of AI decision making. These methods make it possible to control AI behavior more precisely and with far fewer human labels.

1 Introduction

We would like to train AI systems that remain helpful, honest, and harmless, even as some AI capabilities reach or exceed human-level performance. This suggests that we will need to develop techniques that do not rely on humans to supervise all aspects of AI behavior, and that can be used to automatically test and enhance robustness to harmful behaviors. We also aim to develop methods that encode desirable AI behavior in a simple and transparent form, and that make it easier to understand and eval

... (truncated, 98 KB total)
Resource ID: b3e647be3bc180f4 | Stable ID: YmUxZGU5Ym