
Grant: Berkeley Existential Risk Initiative — CHAI ML Engineers (Coefficient Giving → Berkeley Existential Risk Initiative)

Verdict: confirmed (95%)
1 check · 4/3/2026

Deterministic match: grantee, amount, date matched in source snapshot (2714 rows)

Our claim

entire record
Name
Berkeley Existential Risk Initiative — CHAI ML Engineers
Amount
$250,000
Currency
USD
Date
January 2019
Notes
[Navigating Transformative AI] Grant investigator: Daniel Dewey. This page was reviewed but not written by the grant investigator. BERI staff also reviewed this page prior to publication. The Open Philanthropy Project recommended a grant of $250,000 to the Berkeley Existential Risk Initiative (BERI) to temporarily or permanently hire machine learning research engineers dedicated to BERI's collaboration with the Center for Human-Compatible Artificial Intelligence (CHAI). Based on conversations with various professors and students, we believe CHAI could make more progress with more engineering support. This grant follows previous support to UC Berkeley to launch CHAI and to BERI to collaborate with CHAI, and falls within our focus area of potential risks from advanced artificial intelligence.

Source evidence

1 src · 1 check
confirmed (95%) · deterministic-row-match · 4/3/2026
Name
Berkeley Existential Risk Initiative — CHAI ML Engineers
Grantee
Berkeley Existential Risk Initiative
Focus Area
Navigating Transformative AI
Amount
$250,000.00
Date
January 20

Note: Deterministic match: grantee, amount, date matched in source snapshot (2714 rows)

Case № iJrSM76L0d · Filed 4/3/2026 · Confidence 95%