Longterm Wiki

Dan Hendrycks

AI safety research; benchmark creation; CAIS leadership; catastrophic risk focus

Expert Positions (1 topic)

Topic: P(AI Catastrophe)
View: Significant
Estimate: Comparable to pandemics and nuclear war
Confidence: High
Date: May 2023
Source: CAIS Statement on AI Risk (2023)

Policy Positions (1 policy)

Policy: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act
Position: Support
Status: Vetoed
Reason: CAIS director who helped draft the original bill; called its passage "a landmark moment for AI safety"; organized AI safety research community support and stated the bill was "in the long-term interest of industry" because "a major safety incident would likely be the biggest roadblock to further advancement"

Organization Roles (1)

Founder (via parent organization CAIS)
2023 – present

Board Seats (1)

xAI (current)
Advisor
2023 – present

Education

University of California, Berkeley

From wiki article

Quick Assessment

Primary Role: Executive Director, Center for AI Safety (CAIS); AI safety researcher
Key Contributions: Developed MMLU and ETHICS benchmarks for evaluating language models; proposed the GELU activation function (adopted in BERT and GPT-4 series); foundational work on out-of-distribution detection; co-authored papers on robustness and ML safety; coordinated the May 2023 statement on AI extinction risk
Key Publications: A Baseline for Detecting Misclassified and Out-of-Distribution Examples (ICLR 2017); Gaussian Error Linear Units (GELUs) (arXiv 2016); Measuring Massive Multitask Language Understanding (ICLR 2021); Aligning AI With Shared Human Values (ICLR 2021); Natural Adversarial Examples (CVPR 2021); Unsolved Problems in ML Safety (arXiv 2021); Introduction to AI Safety, Ethics, and Society (CRC Press, 2024); Superintelligence Strategy (arXiv 2025)
Institutional Affiliation: Center for AI Safety (CAIS), San Francisco; advisor to xAI and Scale AI
Education: B.S. with Honors, Computer Science, University of Chicago (2018); Ph.D., Computer Science, UC Berkeley (2022)
Influence on AI Safety: CAIS produces safety research, educational resources, and policy advocacy; Hendrycks co-authored NIST AI Risk Management Framework input (2022) and co-authored Superintelligence Strategy (2025) with Eric Schmidt and Alexandr Wang

Overview

Dan Hendrycks (born 1994 or 1995) is a computer scientist and AI safety researcher who serves as executive director of the Center for AI Safety (CAIS), a San Francisco-based nonprofit he co-founded in 2022 with Oliver Zhang. During his doctoral research at UC Berkeley — advised by Jacob Steinhardt and Dawn Song — he developed several benchmarks that became widely used reference points for evaluating large language models, including MMLU and the ETHICS dataset, both published at ICLR 2021. His dissertation, titled Machine Learning Safety, was completed in 2022.

Prior to his benchmark work, Hendrycks co-authored two papers that became foundational in the deep learning literature: a 2016 arXiv preprint proposing the GELU activation function (later adopted in BERT, GPT-2, and subsequent transformer architectures), and a 2017 ICLR paper establishing a simple baseline for out-of-distribution detection using maximum softmax probabilities, which has accumulated over 3,800 citations on Semantic Scholar and is regarded as a foundational reference in the OOD detection literature.

Through CAIS, Hendrycks has combined continued technical research with field-building and policy engagement. In May 2023 he coordinated a public statement asserting that AI extinction risk should be treated as a global priority, which drew over 350 initial signatories — a count that grew to more than 500 as the page remained open — including Turing Award winners and executives from major AI laboratories. In 2024 he published an open-access textbook, Introduction to AI Safety, Ethics, and Society, through CRC Press (Taylor & Francis). In March 2025 he co-authored Superintelligence Strategy with former Google CEO Eric Schmidt and Scale AI CEO Alexandr Wang.

Facts (6)

People
Role / Title: Director
Biographical
Notable For: AI safety research; benchmark creation; CAIS leadership; catastrophic risk focus
Education: University of California, Berkeley
Wikipedia: https://en.wikipedia.org/wiki/Dan_Hendrycks
General
Website: https://hendrycks.com