Longterm Wiki
Fact

Jan Leike — Notable For: VP of Alignment Science at Anthropic; former co-lead of OpenAI Superalignment team; prominent advocate for AI safety resource allocation

Partial · 85% confidence

1 evidence check

Last checked: 3/31/2026

The claim has three components, two of which are confirmed: his role as co-lead of OpenAI's Superalignment team (the source says 'co-leaders', shared with Ilya Sutskever) and his prominence in AI safety (via the Time 100 AI listings). However, the source states only that he 'joined Anthropic in May 2024' and does not confirm the 'VP of Alignment Science' title. And while the source discusses his AI safety work and his departure from OpenAI over safety concerns, it does not explicitly characterize him as a 'prominent advocate for AI safety resource allocation.' The claim is therefore partially, but not fully, supported by the source text.

Evidence — 1 source, 1 check

Partial · 85% · Primary source · Haiku 4.5 · 3/31/2026
Found: The source confirms: (1) Leike joined Anthropic in May 2024; (2) He was co-leader (not sole leader) of OpenAI's Superalignment team with Ilya Sutskever starting June 2023; (3) He was featured in Time's 100 AI list.

Debug info

Record type: fact

Record ID: f_jL4fG5hI6j