Jan Leike — Notable For: VP of Alignment Science at Anthropic; former co-lead of OpenAI Superalignment team; prominent advocate for AI safety resource allocation
1 evidence check
Last checked: 3/31/2026
Evidence — 1 source, 1 check
Note: The claim has three components, two of which are confirmed: his role as co-lead of the OpenAI Superalignment team (the source uses 'co-leaders') and his prominence in AI safety (confirmed via the Time 100 AI listings). However, the source states only that he 'joined Anthropic in May 2024' and does not confirm the 'VP of Alignment Science' title. Likewise, while the source discusses his AI safety work and his departure from OpenAI over safety concerns, it does not explicitly characterize him as a 'prominent advocate for AI safety resource allocation.' The claim is partially, but not fully, supported by the source text.