Longterm Wiki

Credibility Rating

4/5
High (4)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Anthropic

Foundational Anthropic paper introducing Constitutional AI, a scalable alignment technique used in Claude; highly relevant to scalable oversight, RLHF alternatives, and making AI values explicit and auditable.

Metadata

Importance: 88/100 · blog post · primary source

Summary

Anthropic introduces Constitutional AI (CAI), a method for training AI systems to be harmless using a set of principles (a 'constitution') and AI-generated feedback rather than relying solely on human labelers. The approach uses a two-phase process: supervised learning from AI self-critique and revision, followed by reinforcement learning from AI feedback (RLAIF). This reduces dependence on human red-teaming for harmful content while maintaining helpfulness.
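
The supervised phase can be pictured as a short critique-and-revision loop. The sketch below is a minimal illustration, assuming a generic generate() helper for the assistant model; the principle strings are placeholders, not Anthropic's published constitution.

```python
# Minimal sketch of the supervised (critique-and-revision) phase, assuming a
# generic generate() call for the assistant model. The principles below are
# illustrative placeholders, not Anthropic's published constitution.
import random

CONSTITUTION = [
    "Choose the response that is least harmful, deceptive, or toxic.",
    "Choose the response a wise, ethical, and polite person would give.",
]

def generate(prompt: str) -> str:
    """Placeholder for a completion call to the model being trained."""
    raise NotImplementedError

def critique_and_revise(user_prompt: str, n_rounds: int = 2) -> dict:
    """Sample an initial answer, then repeatedly critique and revise it
    against randomly drawn constitutional principles."""
    response = generate(user_prompt)
    for _ in range(n_rounds):
        principle = random.choice(CONSTITUTION)
        critique = generate(
            f"Principle: {principle}\nPrompt: {user_prompt}\n"
            f"Response: {response}\nCritique the response under this principle."
        )
        response = generate(
            f"Critique: {critique}\nOriginal response: {response}\n"
            "Rewrite the response to address the critique while staying helpful."
        )
    # (prompt, final revision) pairs form the dataset for supervised finetuning.
    return {"prompt": user_prompt, "revision": response}
```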

Key Points

  • Introduces a 'constitution'—a set of natural language principles—that guides AI self-critique and revision to reduce harmful outputs.
  • Uses two phases: (1) supervised learning via AI self-critique/revision of harmful responses, (2) RLAIF using AI preference labels instead of human labels (see the labeling sketch after this list).
  • Aims to make the values and norms governing AI behavior more explicit, transparent, and auditable compared to purely human feedback methods.
  • Demonstrates that AI feedback can substitute for human feedback in harmlessness training, scaling oversight without proportional human labor.
  • Produces models that are both less harmful and more helpful than prior RLHF-trained models, avoiding the typical helpfulness-harmlessness tradeoff.
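
In the second phase, the AI preference-labeling step can be sketched as follows, reusing the hypothetical generate() helper from the earlier sketch: a feedback model compares two candidate responses under a constitutional principle and emits a preference label.

```python
# Minimal sketch of AI preference labeling for RLAIF, reusing the hypothetical
# generate() helper from the supervised-phase sketch. A feedback model compares
# two candidate responses under a constitutional principle and emits a label.

def label_preference(user_prompt: str, response_a: str, response_b: str,
                     principle: str) -> int:
    """Return 0 if response A is preferred under the principle, else 1."""
    verdict = generate(
        f"Prompt: {user_prompt}\n(A) {response_a}\n(B) {response_b}\n"
        f"According to this principle: '{principle}', which response is "
        "better? Answer with exactly 'A' or 'B'."
    )
    return 0 if verdict.strip().upper().startswith("A") else 1
```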

Cited by 2 pages

Page | Type | Quality
Long-Horizon Autonomous Tasks | Capability | 65.0
AI-Human Hybrid Systems | Approach | 91.0

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 2 KB
 Alignment Research · Constitutional AI: Harmlessness from AI Feedback

 Dec 15, 2022

 Abstract

 As AI systems become more capable, we would like to enlist their help to supervise other AIs. We experiment with methods for training a harmless AI assistant through self-improvement, without any human labels identifying harmful outputs. The only human oversight is provided through a list of rules or principles, and so we refer to the method as 'Constitutional AI'. The process involves both a supervised learning and a reinforcement learning phase. In the supervised phase we sample from an initial model, then generate self-critiques and revisions, and then finetune the original model on revised responses. In the RL phase, we sample from the finetuned model, use a model to evaluate which of the two samples is better, and then train a preference model from this dataset of AI preferences. We then train with RL using the preference model as the reward signal, i.e. we use 'RL from AI Feedback' (RLAIF). As a result we are able to train a harmless but non-evasive AI assistant that engages with harmful queries by explaining its objections to them. Both the SL and RL methods can leverage chain-of-thought style reasoning to improve the human-judged performance and transparency of AI decision making. These methods make it possible to control AI behavior more precisely and with far fewer human labels.
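
 The RL phase described in the abstract trains a preference model on these AI-generated comparisons and uses its score as the reward signal. The schematic below is an assumed illustration with placeholder classes, not Anthropic's implementation.

```python
# Schematic of the RL phase: fit a preference model on AI-labeled comparisons,
# then use its scalar score as the reward for policy optimization.
# PreferenceModel and the policy object are placeholders, not a specific library's API.
from dataclasses import dataclass

@dataclass
class Comparison:
    prompt: str
    chosen: str    # response preferred by the AI feedback model
    rejected: str  # response dispreferred by the AI feedback model

class PreferenceModel:
    def fit(self, comparisons: list[Comparison]) -> None:
        """Train so that score(prompt, chosen) > score(prompt, rejected)."""
        ...

    def score(self, prompt: str, response: str) -> float:
        """Scalar reward used during reinforcement learning."""
        ...

def rl_step(policy, preference_model: PreferenceModel, prompts: list[str]) -> None:
    """One reward-weighted update: sample from the policy, score each sample
    with the preference model, and push the policy toward higher rewards."""
    samples = [(p, policy.sample(p)) for p in prompts]
    rewards = [preference_model.score(p, r) for p, r in samples]
    policy.update(samples, rewards)  # e.g. a PPO-style update in practice
```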

 Policy Memo: Constitutional AI Policy Memo

Resource ID: 02828439f34ad89c | Stable ID: MjkxMzMyNT