Longterm Wiki

Center for AI Safety – Wikipedia

reference

Credibility Rating

3/5 — Good

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Wikipedia

Useful background reference on one of the prominent AI safety organizations; helpful for understanding the institutional landscape and the 2023 extinction risk statement that attracted significant media and policy attention.

Metadata

Importance: 55/100 · wiki page · reference

Summary

Wikipedia's overview of the Center for AI Safety (CAIS), a nonprofit organization focused on reducing societal-scale risks from advanced AI systems. CAIS is known for publishing the 2023 statement on AI extinction risk signed by hundreds of leading AI researchers and for conducting technical safety research. The article covers the organization's founding, mission, key initiatives, and notable figures involved.

Key Points

  • CAIS is a nonprofit founded by Dan Hendrycks focused on reducing catastrophic and existential risks from AI systems.
  • Published the widely signed 2023 "Statement on AI Risk" warning of extinction-level risks, co-signed by prominent AI researchers including Geoffrey Hinton and Yoshua Bengio.
  • Conducts and funds technical AI safety research, including work on robustness, evaluation, and safety benchmarks.
  • Offers educational resources including an AI safety course and supports the broader AI safety research community.
  • Represents an institutionalized effort to mainstream AI existential risk concerns within the academic and policy communities.

Cited by 2 pages

| Page | Type | Quality |
| --- | --- | --- |
| Center for AI Safety | Organization | 42.0 |
| Dan Hendrycks | Person | 19.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 19 KB
# Center for AI Safety


American AI safety research center

| [![](https://upload.wikimedia.org/wikipedia/en/thumb/f/f2/CenterforAISafetylogo.svg/500px-CenterforAISafetylogo.svg.png)](https://en.wikipedia.org/wiki/File:CenterforAISafetylogo.svg) |
| Formation | 2022 |
| Founders | - [Dan Hendrycks](https://en.wikipedia.org/wiki/Dan_Hendrycks "Dan Hendrycks")<br>- Oliver Zhang |
| Headquarters | [San Francisco](https://en.wikipedia.org/wiki/San_Francisco "San Francisco"), [California](https://en.wikipedia.org/wiki/California "California"), US |
| Director | Dan Hendrycks |
| Website | [safe.ai](https://safe.ai/) |


The **Center for AI Safety** ( **CAIS**) is an American nonprofit organization based in [San Francisco](https://en.wikipedia.org/wiki/San_Francisco "San Francisco") that promotes the safe development and deployment of [artificial intelligence](https://en.wikipedia.org/wiki/Artificial_intelligence "Artificial intelligence"). CAIS' work encompasses research in technical [AI safety](https://en.wikipedia.org/wiki/AI_safety "AI safety") and [AI ethics](https://en.wikipedia.org/wiki/AI_ethics "AI ethics"), advocacy, and support to grow the AI safety research field.[\[1\]](https://en.wikipedia.org/wiki/Center_for_AI_Safety#cite_note-1)[\[2\]](https://en.wikipedia.org/wiki/Center_for_AI_Safety#cite_note-2) It was founded in 2022 by [Dan Hendrycks](https://en.wikipedia.org/wiki/Dan_Hendrycks "Dan Hendrycks") and Oliver Zhang.[\[3\]](https://en.wikipedia.org/wiki/Center_for_AI_Safety#cite_note-3)

In May 2023, CAIS published the [statement on AI risk of extinction](https://en.wikipedia.org/wiki/Statement_on_AI_risk_of_extinction "Statement on AI risk of extinction") signed by hundreds of professors of AI, leaders of major AI companies, and other public figures.[\[4\]](https://en.wikipedia.org/wiki/Center_for_AI_Safety#cite_note-4)[\[5\]](https://en.wikipedia.org/wiki/Center_for_AI_Safety#cite_note-5)[\[6\]](https://en.wikipedia.org/wiki/Center_for_AI_Safety#cite_note-6)[\[7\]](https://en.wikipedia.org/wiki/Center_for_AI_Safety#cite_note-7)[\[8\]](https://en.wikipedia.org/wiki/Center_for_AI_Safety#cite_note-8)

## Research

CAIS researchers published "An Overview of Catastrophic AI Risks", which details risk scenarios and risk mitigation strategies. Risks described include the use of AI in [autonomous warfare](https://en.wikipedia.org/wiki/Lethal_autonomous_weapon "Lethal autonomous weapon") or for engineering pandemics, as well as AI capabilities for deception and [hacking](https://en.wikipedia.org/wiki/Cybercrime "Cybercrime").[\[9\]](https://en.wikipedia.org/wiki/Center_for_AI_Safety#cite_note-9)[\[10\]](https://en.wikipedia.org/wiki/Center_for_AI_Safety#cite_note-:0-10) Another work, conducted in collaboration with researchers at [Carnegie Mellon University](https://en.wikipedia.org/wiki/Carnegie_Mellon_University "Carnegie Mellon University"), described an automated way to discover 

... (truncated, 19 KB total)
Resource ID: 0c57ac12fb1e760b | Stable ID: NDkzZGJhYz