Longterm Wiki
Fact

Future of Life Institute — publication: AI Safety Index published biannually (Summer 2025, Winter 2025). Evaluates 7 leading AI companies on 33 indicators across 6 domains. Winter 2025 finding: no company has adequate guardrails for catastrophic misuse.

Partial · 85% confidence

1 evidence check

Last checked: 3/31/2026

The claim is mostly confirmed but contains discrepancies: (1) The number of companies evaluated is 8, not 7 as claimed. (2) The total number of indicators appears to be 35 (6+7+4+4+4+10), not 33. (3) The biannual publication schedule (Summer 2025, Winter 2025) is not explicitly confirmed in the source text provided. (4) While the source discusses existential safety weaknesses and the lack of explicit plans for controlling superintelligence, it does not use the exact phrase "no company has adequate guardrails for catastrophic misuse." The core findings align with the spirit of the claim, but the specific numerical details do not match precisely.

Evidence — 1 source, 1 check

Partial · 85% · primary · Haiku 4.5 · 3/31/2026
Found: The source confirms: (1) FLI publishes the AI Safety Index; (2) Winter 2025 edition exists (December 2025); (3) evaluates 8 leading AI companies (Anthropic, OpenAI, Google DeepMind, xAI, Z.ai, Meta, D

Note: See the summary above; this check's note repeats it verbatim.

Debug info

Record type: fact

Record ID: f_EkPWWD2ipv
