Longterm Wiki

Grant: Berkeley Existential Risk Initiative — AI Standards (2022) (Coefficient Giving → Berkeley Existential Risk Initiative)

Verdict: confirmed (95%)
1 check · 4/9/2026

Deterministic match: grantee, amount, date matched in source snapshot (2714 rows)

Our claim

entire record
Name
Berkeley Existential Risk Initiative — AI Standards (2022)
Amount
$210,000
Currency
USD
Date
April 2022
Notes
[Navigating Transformative AI] Open Philanthropy recommended a grant of $210,000 to the Berkeley Existential Risk Initiative to support work on the development and implementation of AI safety standards that may reduce potential risks from advanced artificial intelligence. An additional grant to the Center for Long-Term Cybersecurity will support related work. This follows our July 2021 support and falls within our focus area of potential risks from advanced artificial intelligence.

Source evidence

1 src · 1 check
confirmed (95%) · deterministic-row-match · 4/9/2026
Name
Berkeley Existential Risk Initiative — AI Standards (2022)
Grantee
Berkeley Existential Risk Initiative
Focus Area
Navigating Transformative AI
Amount
$210,000.00
Date
April 2022

Note: Deterministic match: grantee, amount, date matched in source snapshot (2714 rows)

Case № xUgEQ1obCn · Filed 4/9/2026 · Confidence 95%