Longterm Wiki
Citation

Samotsvety - Footnote 16

Contradicted · 50% confidence

1 evidence check

Last checked: 4/3/2026

WRONG NUMBERS: The claim gives timelines for transformative AI of 28% by 2030, 60% by 2050, and 89% by 2100, but the source states only an 81% chance of Transformative AI (TAI) by 2100; in particular, the claimed 89% by 2100 contradicts the source's 81%.
UNSUPPORTED: The claim says the group released influential AI risk forecasts throughout 2023-2024, but the source is dated September 9, 2022 and mentions no activity in 2023-2024.
UNSUPPORTED: The claim says their work has been incorporated into literature reviews by Epoch AI and cited in discussions about AI safety policy, but the source mentions neither.

Evidence — 1 source, 1 check

Contradicted · 50% · Haiku 4.5 · 4/3/2026
Found: "Throughout 2023-2024, the group released influential AI risk forecasts, including timelines for transformative AI (28% by 2030, 60% by 2050, 89% by 2100) and estimates of misaligned AI takeover (25% b…"

Note: see findings summarized above.

Debug info

Record type: citation

Record ID: page:samotsvety:fn16