Longterm Wiki

Grant: AI Safety Support — MATS Program (November 2023) (Coefficient Giving → AI Safety Support)

Verdict: confirmed (95%)
1 check · 4/9/2026

Deterministic match: grantee, amount, date matched in source snapshot (2714 rows)

Our claim

entire record
Name
AI Safety Support — MATS Program (November 2023)
Amount
$732,631
Currency
USD
Date
November 2023
Notes
[Global Catastrophic Risks Capacity Building] Open Philanthropy recommended a grant of $732,631 to AI Safety Support to support its collaboration with the Berkeley Existential Risk Initiative on the ML Alignment & Theory Scholars (MATS) Winter 2023-24 Program. The MATS program is an educational seminar and independent research program that provides talented scholars with talks, workshops, and research mentorship in the fields of AI alignment, interpretability, and governance. The program also connects participants with the Berkeley AI safety research community. This follows our June 2023 support and falls within our focus area of potential risks from advanced artificial intelligence.

Source evidence

1 src · 1 check
confirmed (95%) · deterministic-row-match · 4/9/2026
Name
AI Safety Support — MATS Program (November 2023)
Grantee
AI Safety Support
Focus Area
Global Catastrophic Risks Capacity Building
Amount
$732,631.00
Date
November 2023

Note: Deterministic match: grantee, amount, date matched in source snapshot (2714 rows)

Case № qqQrw-w1YA · Filed 4/9/2026 · Confidence 95%