Longterm Wiki

CHAI Publications (Center for Human-Compatible AI)

web

CHAI is one of the leading academic institutions in AI alignment research; this publications index is a key reference for tracking peer-reviewed technical safety work coming out of UC Berkeley.

Metadata

Importance: 72/100 · homepage

Summary

The publications page of the Center for Human-Compatible AI (CHAI) at UC Berkeley, listing research output from Stuart Russell's group and collaborators. CHAI focuses on ensuring AI systems are provably beneficial and aligned with human values, producing foundational work on inverse reinforcement learning, assistance games, and value alignment. This page serves as a central index to CHAI's technical and theoretical contributions to AI safety.

Key Points

  • Aggregates CHAI's full research portfolio including papers on inverse reinforcement learning, cooperative AI, and value alignment
  • Home to foundational alignment concepts like assistance games (formerly cooperative inverse reinforcement learning) developed by Russell and colleagues
  • Research spans both technical safety (reward learning, corrigibility) and broader questions about human-compatible AI design
  • Includes work by prominent researchers such as Stuart Russell, Pieter Abbeel, Anca Dragan, and collaborators
  • Represents one of the most influential academic groups producing peer-reviewed AI safety and alignment research

Cited by 1 page

| Page | Type | Quality |
|------|------|---------|
| Center for Human-Compatible AI | Organization | 37.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 1 KB
Sorry, but the page you were trying to view does not exist.

- [Mailing List](https://humancompatible.ai/#mailinglist)

- [Mission](https://humancompatible.ai/about#mission)
- [Partners](https://humancompatible.ai/about#partners)
- [Privacy Policy](https://humancompatible.ai/privacypolicy/)

- [Faculty](https://humancompatible.ai/people#faculty)
- [Staff](https://humancompatible.ai/people#staff)
- [Researchers](https://humancompatible.ai/people#researchers)
- [Research Fellows](https://humancompatible.ai/people/#research-fellows)
- [Visiting Scholars](https://humancompatible.ai/people#visiting-scholars)
- [Alumni](https://humancompatible.ai/people#alumni)
- [Affiliates](https://humancompatible.ai/people#affiliates)
- [Students](https://humancompatible.ai/people#graduate-students)
- [Affiliated Students](https://humancompatible.ai/people#affiliated-graduate-students)
- [Interns](https://humancompatible.ai/people#interns)
- [Former Interns](https://humancompatible.ai/people#former-interns)

Center for Human-Compatible AI
Resource ID: f83006f689dfcddf | Stable ID: MjljODYwZW