Model Organisms of Misalignment - Footnote 32
Confirmed — 100% confidence
1 evidence check
Last checked: 4/3/2026
Migrated from citation_quotes. Original verdict: accurate
Evidence — 1 source, 1 check
Confirmed — 100% · Haiku 4.5 · 4/3/2026
Found: - **RLHF and training-time mitigations**: <EntityLink id="anthropic">Anthropic</EntityLink>'s March 2025 work tested whether reinforcement learning during training could reduce alignment faking, findi…
Note: Migrated from citation_quotes accuracy check. Original verdict: accurate
Debug info
Record type: citation
Record ID: page:model-organisms-of-misalignment:fn32