Updated 2026-02-09

AI Value Learning

Research Area

Value Learning

Training AI systems to infer and adopt human values from observation and interaction
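One common framing of value learning is Bayesian reward inference: the system maintains a posterior over candidate reward functions and updates it from observed human choices, typically under a Boltzmann-rational model of the human. The sketch below is purely illustrative and not from this page; the hypothesis names, reward values, and rationality parameter `beta` are all invented for the example.

```python
import math

# Toy value-learning sketch: infer which of two hypothetical "values"
# (reward functions) a human holds, from observed choices, assuming
# the human is Boltzmann-rational (noisily picks higher-reward actions).

ACTIONS = ["help", "ignore"]

# Hypothetical reward functions the learner considers.
REWARD_HYPOTHESES = {
    "altruistic": {"help": 1.0, "ignore": 0.0},
    "indifferent": {"help": 0.0, "ignore": 0.0},
}

def choice_likelihood(action, rewards, beta=2.0):
    """P(action | rewards) under a Boltzmann-rational human model."""
    exps = {a: math.exp(beta * rewards[a]) for a in ACTIONS}
    return exps[action] / sum(exps.values())

def posterior(observations, prior=None):
    """Bayesian update over reward hypotheses given observed actions."""
    if prior is None:
        prior = {h: 1.0 / len(REWARD_HYPOTHESES) for h in REWARD_HYPOTHESES}
    post = dict(prior)
    for action in observations:
        for h, rewards in REWARD_HYPOTHESES.items():
            post[h] *= choice_likelihood(action, rewards)
        total = sum(post.values())
        post = {h: p / total for h, p in post.items()}
    return post

# Watching repeated helping shifts belief toward the altruistic hypothesis.
beliefs = posterior(["help", "help", "help"])
```

Under these toy assumptions, three observed "help" actions push the posterior strongly toward the altruistic hypothesis, since that hypothesis assigns them higher likelihood than the indifferent one.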

Related
People
Stuart Russell
Organizations
Center for Human-Compatible AI
Google DeepMind
Risks
Goal Misgeneralization

This page is a stub. Content needed.

Related Wiki Pages

Top Related Pages

Other

Stuart Armstrong
Jessica Taylor
RLHF

Approaches

AI Alignment

Key Debates

Why Alignment Might Be Hard
AI Alignment Research Agendas

Analysis

Alignment Robustness Trajectory Model

Risks

Epistemic Sycophancy