Longterm Wiki
Fact

Center for AI Safety — Description: The Center for AI Safety (CAIS) is a San Francisco-based nonprofit focused on reducing societal-scale risks from AI through technical safety research, field-building, and public communication. Founded by Dan Hendrycks and Oliver Zhang. Known for the MMLU benchmark, representation engineering, and the May 2023 "Statement on AI Risk" signed by 350+ AI leaders.

Partial · 85% confidence

1 evidence check

Last checked: 3/31/2026

The source excerpt from the CAIS 'about' page confirms the organization's core mission and purpose regarding AI safety. However, the excerpt provided is limited and does not address the specific factual claims about location, founders, or notable achievements. Since the source URL is the official CAIS about page, these details likely exist elsewhere on that page, but they are not present in the excerpt provided. The claim cannot be fully verified or contradicted based solely on this excerpt, making it a partial confirmation of the mission statement only.

Evidence — 1 source, 1 check

Partial · 85% · primary · Haiku 4.5 · 3/31/2026
Found: The source text confirms CAIS exists to ensure safe development and deployment of AI and addresses AI risk as a global priority. However, the provided excerpt does not mention the location (San Francisco), the founders, or the notable achievements.
Debug info

Record type: fact

Record ID: f_1BWsBJuBcg
