
AI Safety Grantmaking

Status: Open
Type: Grant Round
Funder: Coefficient Giving
Division: Navigating Transformative AI
Deadline: Rolling
Description

Coefficient Giving's ongoing AI safety grantmaking, covering technical alignment research, governance, and field-building.

Notes

Largest funder of AI safety research by total dollars committed, with roughly $63.6M granted in 2024.

Grants Awarded

333 grants listed; total $543M.

Grant | Recipient | Amount | Date
Georgetown University — Center for Security and Emerging Technology | Georgetown CSET | $55M | Jan 2019
Center for Security and Emerging Technology — General Support (August 2021) | Georgetown CSET | $39M | Aug 2021
OpenAI — General Support | OpenAI | $30M | Mar 2017
FAR.AI — AI Safety Research and Field-building | FAR AI | $29M | Sep 2025
Massachusetts Institute of Technology — AI Trends and Impacts Research (2022) | Massachusetts Institute of Technology | $13M | Mar 2022
FAR AI — AI Safety Regranting | FAR AI | $12M | Jun 2024
UC Berkeley — Center for Human-Compatible Artificial Intelligence (2021) | University of California, Berkeley | $12M | Jan 2021
Institute for AI Policy and Strategy — General Support (2025) | Institute for AI Policy and Strategy | $12M | Apr 2025
OpenMined Foundation — Secure Enclaves for LLM Evaluation | OpenMined | $11M | Jun 2025
Redwood Research — General Support (2022) | Redwood Research | $11M | Aug 2022
RAND Corporation — Emerging Technology Initiatives | RAND | $11M | Oct 2023
RAND — AI Evaluation and Testing | RAND | $10M | Sep 2025
Redwood Research — General Support (2021) | Redwood Research | $9.4M | Nov 2021
Center for a New American Security — AI Security and Stability Program | Center for a New American Security | $8.3M | Jun 2025
Center for Security and Emerging Technology — General Support (January 2021) | Georgetown CSET | $8M | Jan 2021
Machine Intelligence Research Institute — General Support (2020) | Machine Intelligence Research Institute | $7.7M | Feb 2020
Epoch — General Support (2023) | Epoch AI | $6.9M | Apr 2023
OpenMined — Software for AI Audits | OpenMined | $6M | Sep 2023
RAND Corporation — Technology and Security Policy Center | RAND | $6M | Aug 2024
UC Berkeley — Center for Human-Compatible AI (2016) | University of California, Berkeley | $5.6M | Aug 2016
RAND Corporation — Emerging Technology Fellowships and Research | RAND | $5.5M | Apr 2023
University of Oxford — Oxford AI Governance Initiative | University of Oxford | $5.4M | Mar 2025
Redwood Research — General Support (2023) | Redwood Research | $5.3M | Jun 2023
Center for AI Safety — General Support (2022) | Center for AI Safety | $5.2M | Nov 2022
Center for a New American Security — Work on AI Governance | Center for a New American Security | $5.1M | Jul 2022
National Science Foundation — Safe Learning-Enabled Systems | National Science Foundation | $5M | Apr 2023
RAND Corporation — Emerging Technology Initiatives (2024) | RAND | $5M | Jul 2024
Center for AI Safety — General Support (2023) | Center for AI Safety | $4M | Apr 2023
Machine Intelligence Research Institute — General Support (2017) | Machine Intelligence Research Institute | $3.8M | Oct 2017
Massachusetts Institute of Technology — AI Trends and Impacts Research (2025) | Massachusetts Institute of Technology | $3.6M | Jan 2025
UC Berkeley — Cyberoffense Benchmark | University of California, Berkeley | $3.4M | Jun 2025
Institute for AI Policy and Strategy — General Support (January 2024) | Institute for AI Policy and Strategy | $3M | Jan 2024
Center for Responsible Innovation — General Support | Center for Responsible Innovation | $3M | Aug 2024
Americans for Responsible Innovation — General Support | Americans for Responsible Innovation | $3M | Aug 2024
Centre for the Governance of AI — General Support (2023) | GovAI | $3M | May 2023
FutureHouse — Benchmarks for Biology Research and Development | FutureHouse | $2.9M | Mar 2024
Stanford University — LLM Cybersecurity Benchmark | Stanford University | $2.9M | Jul 2024
Massachusetts Institute of Technology — AI Trends and Impacts Research (2023) | Massachusetts Institute of Technology | $2.9M | Jul 2023
GovAI — General Support (June 2025) | GovAI | $2.8M | Jun 2025
Rethink Priorities — AI Governance Research (2022) | Rethink Priorities | $2.7M | Mar 2022
Hebrew University of Jerusalem — Governance of AI Lab | Hebrew University of Jerusalem | $2.7M | Jun 2025
Machine Intelligence Research Institute — General Support (2019) | Machine Intelligence Research Institute | $2.7M | Feb 2019
Eleuther AI — Interpretability Research | Eleuther AI | $2.6M | Nov 2023
UC Berkeley — Compute Resources for AI Safety Research | University of California, Berkeley | $2.6M | May 2025
Centre for the Governance of AI — AI Field Building | GovAI | $2.5M | Dec 2021
GovAI — General Support (2024) | GovAI | $2.5M | Oct 2024
RAND Corporation — Emerging Technology Fellowships and Research (2024) | RAND | $2.5M | Apr 2024
RAND Corporation — Emerging Technology Initiatives (2025) | RAND | $2.5M | May 2025
Montreal Institute for Learning Algorithms — AI Safety Research | Montreal Institute for Learning Algorithms | $2.4M | Jul 2017
Open Phil AI Fellowship — 2019 Class | Coefficient Giving | $2.3M | May 2019
Open Phil AI Fellowship — 2020 Class | Coefficient Giving | $2.3M | May 2020
Apollo Research — General Support | Apollo Research | $2.2M | May 2024
Palisade Research — General Support (2025) | Palisade Research | $2.1M | May 2025
New York University — LLM Cybersecurity Benchmark | New York University | $2.1M | Jul 2024
Berkeley Existential Risk Initiative — Machine Learning Alignment Theory Scholars | Berkeley Existential Risk Initiative | $2M | Nov 2022
Wilson Center — AI Policy Training Program (2022) | The Wilson Center | $2M | Jan 2022
Mila — AI Safety Research | Mila - Quebec AI Institute | $2M | May 2024
Future of Humanity Institute — General Support | Future of Humanity Institute | $2M | Mar 2017
Epoch — General Support (2022) | Epoch AI | $2M | Jun 2022
Center for AI Safety — Exit Grant (October 2023) | Center for AI Safety | $1.9M | Oct 2023
Open Phil AI Fellowship — 2022 Class | Coefficient Giving | $1.8M | Apr 2022
University of Maryland — LLM Cybersecurity Benchmark | University of Maryland | $1.7M | Sep 2024
Center for Open Science — LLM Research Benchmark | Center for Open Science | $1.7M | Jul 2024
Palisade Research — General Support (2024) | Palisade Research | $1.7M | Jun 2024
The University of Texas at Austin — Research on AI Safety and Computational Complexity Theory | University of Texas at Austin | $1.6M | May 2025
SeedAI — General Support | SeedAI | $1.6M | Mar 2025
Ought — General Support (2020) | Elicit (AI Research Tool) | $1.6M | Jan 2020
AI Safety Support — SERI MATS Program | AI Safety Support | $1.5M | Nov 2022
UCLA School of Law — AI Governance | UCLA School of Law | $1.5M | May 2017
Apollo Research — Startup Funding | Apollo Research | $1.5M | Jun 2023
Stanford University — AI Alignment Research (2021) | Stanford University | $1.5M | Nov 2021
SeedAI — AI Policy Development and Related Events | SeedAI | $1.5M | Apr 2024
Talos Network — AI Governance Field-Building | Talos Network | $1.5M | Apr 2025
Simon Institute for Longterm Governance — General Support | Simon Institute for Longterm Governance | $1.5M | Apr 2024
UC Berkeley — AI Safety Research | University of California, Berkeley | $1.5M | Oct 2017
Hofvarpnir Studios — Compute Cluster for AI Safety Research | Hofvarpnir Studios | $1.4M | Mar 2022
Center for AI Safety — Philosophy Fellowship and NeurIPS Prizes | Center for AI Safety | $1.4M | Feb 2023
Massachusetts Institute of Technology — Adversarial Robustness Research | Massachusetts Institute of Technology | $1.4M | Feb 2021
Stanford University — Support for Percy Liang | Stanford University | $1.3M | May 2017
Alignment Research Center — General Support (November 2022) | Alignment Research Center | $1.3M | Nov 2022
Open Phil AI Fellowship — 2018 Class | Coefficient Giving | $1.2M | May 2018
AI Safety Support — SERI MATS 4.0 | AI Safety Support | $1.2M | Jun 2023
Future of Life Institute — Artificial Intelligence Risk Reduction | Future of Life Institute | $1.2M | Aug 2015
Owain Evans Research Group — AI Evaluations Research | Effective Ventures Foundation USA | $1.2M | May 2023
Safe AI Forum — Operating Expenses | Safe AI Forum | $1.2M | Dec 2023
UC Berkeley — AI Safety Research (2018) | University of California, Berkeley | $1.1M | Nov 2018
Berkeley Existential Risk Initiative — CHAI Collaboration (2022) | Berkeley Existential Risk Initiative | $1.1M | Feb 2022
UC Berkeley — AI Safety Research (2019) | University of California, Berkeley | $1.1M | Dec 2019
Redwood Research — AI Safety Research Collaborations | Redwood Research | $1.1M | May 2025
Northeastern University — Large Language Model Interpretability Research (2024) | Northeastern University | $1.1M | May 2024
Forecasting Research Institute — AI Progress Forecasting Panel | Forecasting Research Institute (FRI) | $1.1M | Oct 2024
Algorithmic Research Group — Language Model Capabilities Benchmarking (2024) | Algorithmic Research Group | $1.1M | Aug 2024
Berkeley Existential Risk Initiative — CHAI Internships | Berkeley Existential Risk Initiative | $1.1M | Jan 2024
Princeton University — Software Engineering LLM Benchmark | Princeton University | $1M | May 2024
Berkeley Existential Risk Initiative — SERI MATS Program (2022) | Berkeley Existential Risk Initiative | $1M | Apr 2022
Open Phil AI Fellowship — 2021 Class | Coefficient Giving | $1M | Apr 2021
GovAI — General Support (May 2025) | GovAI | $1M | May 2025
Harvard University — AI Interpretability, Controllability, and Safety Research | Harvard University | $1M | Jan 2024
Ought — General Support (2019) | Elicit (AI Research Tool) | $1M | Nov 2019
Stanford University — LLM-Generated Research Ideation Benchmark | Stanford University | $880K | May 2024
Princeton University — AI R&D Benchmark | Princeton University | $863K | Sep 2024
The Vista Institute for AI Policy — AI Law Field Building | The Vista Institute for AI Policy | $846K | Oct 2024
Friedrich Schiller University Jena — Analytical Chemistry Benchmark | Friedrich Schiller University Jena | $829K | Oct 2024
Institute for AI Policy and Strategy — General Support (April 2024) | Institute for AI Policy and Strategy | $828K | Apr 2024
Tarbell Center for AI Journalism — General Support | Tarbell | $816K | Nov 2024
University of Illinois Foundation — LLM Hacking Benchmarks | University of Illinois Urbana-Champaign | $800K | Jan 2024
UC Berkeley — InterACT Lab | University of California, Berkeley | $775K | Feb 2025
Longview Philanthropy — AI Policy Development at the OECD | Longview Philanthropy | $770K | Feb 2023
Trustees of Boston University — LLM Research Benchmark | Boston University | $756K | Jul 2024
Guide Labs — Open Access Interpretability Project | Guide Labs | $750K | Aug 2023
Stanford University — AI Interpretability Research | Stanford University | $744K | Apr 2025
University of California, Berkeley — Software Engineering Benchmark | University of California, Berkeley | $740K | Aug 2024
Practical AI Alignment and Interpretability Research Group — Interpretability Work | Practical AI Alignment and Interpretability Research Group | $737K | Sep 2024
University of Washington — Adversarial Robustness Research | University of Washington | $730K | Oct 2021
Berkeley Existential Risk Initiative — AI Risk Management Frameworks (2024) | Berkeley Existential Risk Initiative | $713K | Feb 2024
AI Safety Support — MATS Program Extension | AI Safety Support | $710K | Oct 2024
Berkeley Existential Risk Initiative — CHAI Collaboration (2019) | Berkeley Existential Risk Initiative | $705K | Nov 2019
University of Toronto — AI Safety and Alignment Initiatives | University of Toronto | $700K | Jun 2025
FAR.AI — AI Alignment Research Projects (July 2024) | FAR AI | $680K | Jul 2024
Talos Network — General Support | Talos Network | $662K | Jul 2024
Langsikt — AI Safety Advocacy | Langsikt | $660K | Oct 2024
FAR.AI — AI Alignment Research Projects (January 2024) | FAR AI | $646K | Jan 2024
Tony Blair Institute for Global Change — AI Governance Initiatives | Tony Blair Institute for Global Change | $636K | Nov 2024
FAR.AI — General Support (2022) | FAR AI | $625K | Dec 2022
University of Oxford — LLM Research Replication | University of Oxford | $622K | Sep 2024
AI Impacts — General Support (2023) | AI Impacts | $620K | Aug 2023
AI Safety Support — Astra Fellowship | AI Safety Support | $617K | Jan 2024
Rethink Priorities — AI Governance Research (2021) | Rethink Priorities | $612K | Jul 2021
FutureSearch — Benchmark for Language Model Forecasting | FutureSearch | $607K | Mar 2024
GovAI — General Support (November 2024) | GovAI | $600K | Nov 2024
Carnegie Endowment for International Peace — AI Governance Research (2022) | Carnegie Endowment for International Peace | $598K | Mar 2022
Yale University — LLM Persuasiveness Evaluation | Yale University | $596K | Jun 2024
Study and Training Related to AI Policy Careers — Scholarship Support | Study and Training Related to AI Policy Careers | $594K | Mar 2020
University of Tübingen — Robustness Research (Wieland Brendel) | University of Tübingen | $590K | Feb 2021
Carnegie Mellon University — Robust AI Unlearning Techniques | Carnegie Mellon University | $584K | May 2025
Good Ancestors Policy — Global Catastrophic Risks Advocacy | Good Ancestors Policy | $580K | Dec 2024
University of Tübingen — Adversarial Robustness Research | University of Tübingen | $575K | Feb 2023
Northeastern University — Large Language Model Interpretability Research | Northeastern University | $562K | Nov 2022
Foundation for American Innovation — AI Safety Policy Advocacy | Foundation for American Innovation | $553K | Jun 2023
Massachusetts Institute of Technology — AI Trends and Impacts Research (2020) | Massachusetts Institute of Technology | $551K | Nov 2020
Sage — AI Explainers | Sage | $550K | May 2024
Carnegie Mellon University — Benchmark for Web-Based Tasks | Carnegie Mellon University | $547K | Mar 2024
WestExec — Report on Assurance in Machine Learning Systems | WestExec | $540K | Feb 2020
Metaculus — Forecasting Tournaments | Metaculus | $532K | May 2024
Ought — General Support (2018) | Elicit (AI Research Tool) | $525K | May 2018
University of Toronto — Machine Learning Research | University of Toronto | $520K | Dec 2020
Cambridge in America — Data Science Benchmark | Cambridge in America | $518K | Jul 2024
Speculative Technologies — Leadership Training Program | Speculative Technologies | $500K | Mar 2025
Machine Intelligence Research Institute — General Support (2016) | Machine Intelligence Research Institute | $500K | Aug 2016
UC Berkeley — Study on Frontier Model Behavior | University of California, Berkeley | $500K | Jun 2025
Wilson Center — AI Policy Seminar Series (June 2020) | The Wilson Center | $497K | Jun 2020
Michigan State University — Robust AI Unlearning Techniques | Michigan State University | $484K | May 2025
University of California, San Diego — AI Persuasiveness Evaluation | University of California, San Diego | $471K | Apr 2024
FAR.AI — Language Model Misalignment (2022) | FAR AI | $464K | Aug 2022
Training for Good — Operating Costs and EU Tech Policy Fellowship | Training for Good | $461K | Jan 2025
FAR.AI — General Support (2023) | FAR AI | $460K | Jul 2023
Conjecture — SERI MATS Program in London (2022) | Unknown | $457K | Oct 2022
Centre for the Governance of AI — General Support (2020) | GovAI | $450K | May 2020
Stiftung Neue Verantwortung — AI Policy Analysis | Stiftung Neue Verantwortung | $444K | Mar 2022
Carnegie Endowment for International Peace — AI Governance Research (2025) | Carnegie Endowment for International Peace | $444K | Apr 2025
AI Safety Support — Situational Awareness Research | AI Safety Support | $444K | Apr 2023
University of Oxford — Research on the Global Politics of AI | University of Oxford | $430K | Jul 2018
Berkeley Existential Risk Initiative — SERI MATS 4.0 | Berkeley Existential Risk Initiative | $429K | Jun 2023
FAR.AI — Language Model Misalignment | FAR AI, Language Model Safety Fund | $426K | Oct 2021
Foundation for American Innovation — Exit Grant | Foundation for American Innovation | $424K | Jul 2024
Modulo Research — AI Safety Research | Modulo Research | $408K | Aug 2023
Berkeley Existential Risk Initiative — Core Support and CHAI Collaboration | Berkeley Existential Risk Initiative | $404K | Jul 2017
Straumli — LLM Cyberoffense Benchmark | Straumli | $400K | Feb 2024
GovAI — General Support (November 2024) | GovAI | $400K | Nov 2024
Wilson Center — AI Policy Seminar Series | The Wilson Center | $400K | Jul 2018
Matthew Kenney — Language Model Capabilities Benchmarking | Matthew Kenney | $397K | Sep 2023
Meridian — Research on Emergent Misalignment | Meridian | $396K | Jun 2025
Stanford University — AI Standards Report | Stanford University | $388K | Mar 2025
Wilson Center — AI Policy Seminar Series (February 2020) | The Wilson Center | $368K | Feb 2020
Center for International Security and Cooperation — AI and Strategic Stability | Stanford University | $365K | Jul 2021
AI Impacts — General Support | AI Impacts | $365K | Jun 2022
Forecasting Research Institute — Tripwire Capability Evaluations | Forecasting Research Institute (FRI) | $359K | Oct 2024
University of Maryland — Research on Neural Network Generalization | University of Maryland | $350K | Aug 2023
AI Impacts — Expert Survey on Progress in AI | AI Impacts | $345K | Aug 2023
Carnegie Mellon University — Research on Adversarial Examples | Carnegie Mellon University | $343K | Jul 2022
Cornell University — AI Safety Research | Cornell University | $343K | Feb 2023
Stanford University — Adversarial Robustness Research (Dimitris Tsipras) | Stanford University | $331K | Aug 2021
Stanford University — Adversarial Robustness Research (Shibani Santurkar) | Stanford University | $331K | Aug 2021
UC Berkeley — Adversarial Robustness Research (David Wagner) | University of California, Berkeley | $330K | Feb 2021
UC Berkeley — Adversarial Robustness Research (Dawn Song) | University of California, Berkeley | $330K | Feb 2021
Carnegie Mellon University — Adversarial Robustness Research | Carnegie Mellon University | $330K | May 2021
University of Southern California — Adversarial Robustness Research | University of Southern California | $320K | Aug 2021
University of Maryland — Policy Fellowship (2023) | University of Maryland | $313K | Apr 2023
National Academies of Sciences, Engineering, and Medicine — Safety-Critical Machine Learning | National Academies of Sciences, Engineering, and Medicine | $309K | Feb 2022
Rethink Priorities — AI Governance Workshop | Rethink Priorities | $302K | Apr 2023
Centre for International Governance Innovation — Global AI Risks Initiative | Centre for International Governance Innovation | $300K | Jul 2024
Berkeley Existential Risk Initiative — AI Standards (2021) | Berkeley Existential Risk Initiative | $300K | Jul 2021
University of Tübingen — Adversarial Robustness Research (Matthias Hein) | University of Tübingen | $300K | Feb 2021
Yale University — Research on the Global Politics of AI | Yale University | $299K | Jul 2017
Wilson Center — AI Policy Training Program | The Wilson Center | $291K | Apr 2021
Effective Ventures Foundation — AI Safety Communications Centre | Centre for Effective Altruism | $288K | Aug 2023
FAR.AI — FAR Labs Office Space | FAR AI | $280K | Mar 2023
George Mason University — Research into Future Artificial Intelligence Scenarios | George Mason University | $277K | Jun 2016
Carnegie Mellon University — LLM Use Case Database | Carnegie Mellon University | $267K | May 2024
Daniel Kang — LLM Hacking Benchmarks | Daniel Kang | $265K | Apr 2025
Alignment Research Center — General Support | Alignment Research Center | $265K | Mar 2022
UC Santa Cruz — Adversarial Robustness Research | University of California, Santa Cruz | $265K | Jan 2021
University of Oxford — AI Cybersecurity Project | University of Oxford | $265K | Nov 2024
Stanford University — Language Model Evaluations | Stanford University | $250K | Apr 2023
Centre for Effective Altruism — Harvard AI Safety Office | Centre for Effective Altruism | $250K | Aug 2022
University of Chicago — Research on Complementary AI | University of Chicago | $250K | May 2023
Berkeley Existential Risk Initiative — CHAI ML Engineers | Berkeley Existential Risk Initiative | $250K | Jan 2019
University of Cambridge — Machine Learning Research | University of Cambridge | $250K | Apr 2021
Conjecture — SERI MATS (2023) | Unknown | $245K | Apr 2023
Meridian — Avoiding Encoded Reasoning in LLMs | Meridian | $245K | Jun 2025
Georgetown University — Policy Fellowship (2022) | Georgetown University | $239K | Dec 2022
Mila — Research Project on Artificial Intelligence | Montreal Institute for Learning Algorithms | $238K | Nov 2021
London Initiative for Safe AI (LISA) — General Support | London Initiative for Safe AI | $237K | Nov 2023
Leap Labs — Interpretability Research | Leap Labs | $230K | Apr 2023
Conjecture — AI Safety Technical Program | Unknown | $224K | May 2023
Conjecture — Cybersecurity Bootcamp | Unknown | $223K | Jun 2025
Lee Foster — LLM Misuse Database | Lee Foster | $223K | Jan 2024
University of Maryland — Study on Encoded Reasoning in LLMs | University of Maryland | $218K | Jun 2025
Observatorio de Riesgos Catastróficos Globales — AI Risk Explorer | Observatorio de Riesgos Catastróficos Globales | $213K | Oct 2024
Université de Montréal — Research Project on Artificial Intelligence | Université de Montréal | $211K | Sep 2021
Berkeley Existential Risk Initiative — AI Standards (2022) | Berkeley Existential Risk Initiative | $210K | Apr 2022
Berkeley Existential Risk Initiative — SERI Summer Fellowships | Berkeley Existential Risk Initiative | $210K | Mar 2021
AI Safety Hub — Startup Costs | AI Safety Hub | $204K | Sep 2022
AI Standards Lab — AI Standards and Risk Management Frameworks | AI Standards Lab | $200K | May 2024
Arizona State University — Adversarial Robustness Research | Arizona State University | $200K | Aug 2022
Center for Long-Term Cybersecurity — AI Risk Management Frameworks (2024) | Center for Long-Term Cybersecurity | $200K | Feb 2024
UC Berkeley — Center for Human-Compatible AI (2019) | University of California, Berkeley | $200K | Nov 2019
Electronic Frontier Foundation — Artificial Intelligence Scenarios and Social Impacts | Electronic Frontier Foundation | $199K | Nov 2016
Berkeley Existential Risk Initiative — SERI MATS Program | Berkeley Existential Risk Initiative | $195K | Nov 2021
Kairos — General Support | Kairos Project | $195K | Jun 2025
Epoch — AI Worldview Investigations | Epoch AI | $189K | Feb 2023
Institute for Replication — Large Language Model Replication Games | Institute for Replication | $174K | Feb 2024
Purdue University — Language Model Research | Purdue University | $170K | Dec 2022
FAR.AI — Alignment Workshop | FAR AI | $167K | Sep 2023
AI Scholarships — Scholarship Support (2018) | AI Scholarships | $159K | Feb 2018
Rethink Priorities — AI Governance Research (2023) | Rethink Priorities | $155K | Apr 2023
Stanford University — AI Alignment Research (Barrett and Viteri) | Stanford University | $154K | Jul 2022
Berryville Institute of Machine Learning — Machine Learning Security Research | Berryville Institute of Machine Learning | $150K | Jan 2021
Berkeley Existential Risk Initiative — General Support | Berkeley Existential Risk Initiative | $150K | Jan 2020
Machine Intelligence Research Institute — AI Safety Retraining Program | Machine Intelligence Research Institute | $150K | Jun 2018
AI Safety Support — AI Safety Technical Program (May 2023) | AI Safety Support | $146K | May 2023
Berkeley Existential Risk Initiative — David Krueger Collaboration | Berkeley Existential Risk Initiative | $140K | Apr 2022
University of Utah — AI Alignment Research | University of Utah | $140K | Apr 2023
Effective Altruism Israel — Information Security Talent Development | Effective Altruism Israel | $139K | May 2024
UC Santa Barbara — LLM Use Case Database | University of California, Santa Barbara | $133K | Jun 2024
ETH Zürich — LLM Adversarial Attacks Benchmark | ETH Zürich | $130K | Sep 2024
Surge AI — Data Production for AI Safety Research | Surge AI | $126K | Sep 2023
Forecasting Research Institute — Red-line Evaluations | Forecasting Research Institute (FRI) | $125K | Apr 2024
Stanford University — AI Economic Impacts Workshop | Stanford University | $120K | Nov 2023
Center for Strategic and International Studies — AI Accident Risk and Technology Competition | Center for Strategic and International Studies | $118K | Sep 2020
Center for a New American Security — AI and Security Projects | Center for a New American Security | $117K | Oct 2020
Rethink Priorities — Research on LLM Use | Rethink Priorities | $116K | Apr 2024
UC Santa Cruz — Adversarial Robustness Research (2023) | University of California, Santa Cruz | $114K | Jan 2023
Observatorio de Riesgos Catastróficos Globales — AI Policy Work | Observatorio de Riesgos Catastróficos Globales | $110K | Mar 2024
University of Pennsylvania — AI Governance Roundtables | University of Pennsylvania | $110K | Sep 2023
Center for a New American Security — Risks from Militarized AI | Center for a New American Security | $101K | Sep 2021
University of British Columbia — AI Alignment Research | University of British Columbia | $100K | Jan 2023
Responsible AI Collaborative — AI Incident Database | Responsible AI Collaborative | $100K | Feb 2023
FAR.AI — AI Interpretability Research | FAR AI | $100K | Mar 2023
University of Michigan — Scalable Oversight Research | University of Michigan | $100K | Feb 2024
Berkeley Existential Risk Initiative — General Support (2022) | Berkeley Existential Risk Initiative | $100K | Nov 2022
UC Berkeley — AI Red-teaming Bootcamp | University of California, Berkeley | $100K | Jan 2025
Forecasting Research Institute — Forecasting Benchmark | Forecasting Research Institute (FRI) | $100K | Apr 2024
University of Wisconsin–Madison — Scalable Oversight Research | University of Wisconsin–Madison | $100K | Apr 2024
Princeton University — Scalable Oversight Research | Princeton University | $100K | Apr 2024
Stanford University — Machine Learning Security Research Led by Dan Boneh and Florian Tramer | Stanford University | $100K | Jul 2018
AI Impacts — General Support (2018) | AI Impacts | $100K | Jun 2018
Alignment Research Engineer Accelerator — AI Safety Technical Program (2023) | Alignment Research Engineer Accelerator | $98K | Nov 2023
Johns Hopkins University — Course Buyouts | Johns Hopkins University | $95K | Apr 2025
Faculty AI — Wargame for AI Documentary | Faculty AI | $93K | May 2024
Apart Research — AI Alignment Hackathons (2022) | Apart Research | $89K | Dec 2022
UC Berkeley — Adversarial Robustness Research (Aditi Raghunathan) | University of California, Berkeley | $88K | Aug 2021
University of Illinois — AI Alignment Research | University of Illinois | $80K | Mar 2023
University of Toronto — Alignment Research | University of Toronto | $80K | Jan 2023
Stanford University — AI Index | Stanford University | $78K | Sep 2021
AI Alignment Awards — Shutdown Problem Contest | AI Alignment Awards | $75K | Sep 2022
Legal Priorities Project — Law & AI Summer Research Fellowship | Legal Priorities Project | $75K | Aug 2023
University of Minnesota — Legal Automation Benchmark | University of Minnesota, Twin Cities | $74K | Jun 2024
Northeastern University — Mechanistic Interpretability Research | Northeastern University | $72K | Sep 2023
Berkeley Existential Risk Initiative — MineRL BASALT Competition | Berkeley Existential Risk Initiative | $70K | Jul 2021
Applied Research Laboratory for Intelligence and Security — Report on Security Clearances | Applied Research Laboratory for Intelligence and Security | $70K | Jul 2021
Berkeley Existential Risk Initiative — University Collaboration Program | Berkeley Existential Risk Initiative | $70K | Oct 2023
Berkeley Existential Risk Initiative — Scalable Oversight Dataset | Berkeley Existential Risk Initiative | $70K | Sep 2023
Jennifer Lin — LLM Model-Based Planning Report | Jennifer Lin | $70K | Jul 2024
Neel Nanda — Interpretability Research | Neel Nanda | $70K | Jan 2023
Center for International Security and Cooperation — AI Accident Risk and Technology Competition | Stanford University | $67K | Sep 2020
Berkeley Existential Risk Initiative — InterACT Lab | Berkeley Existential Risk Initiative | $59K | Jan 2025
Berkeley Existential Risk Initiative — AI Governance Workshop | Berkeley Existential Risk Initiative | $57K | May 2025
Johns Hopkins University — Support for Jared Kaplan and Brice Ménard | Johns Hopkins University | $55K | Mar 2020
Ali Merali — Economics Research on AI Model Scaling Effects | Ali Merali | $55K | Jul 2024
Harmony Intelligence — LLM Moneymaking Benchmark | Harmony Intelligence | $54K | Jun 2024
Swiss AI Safety Summer Camp — AI Safety Bootcamp | Swiss AI Safety Summer Camp | $51K | Aug 2023
Centre for the Governance of AI — Compute Strategy Workshop | GovAI | $51K | Sep 2022
FAR.AI — Interpretability Research | FAR AI | $50K | Dec 2022
Mila — Workshop on Human-Level AI | Mila - Quebec AI Institute | $50K | May 2023
AI Impacts — General Support (2020) | AI Impacts | $50K | Nov 2020
World Economic Forum — Global AI Council Workshop | World Economic Forum | $50K | Apr 2020
FAR.AI — Inverse Scaling Prize | FAR AI | $50K | Dec 2022
AI Safety Hub — Safety Labs | AI Safety Hub | $47K | Nov 2022
Berkeley Existential Risk Initiative — Latent Adversarial Training Project | Berkeley Existential Risk Initiative | $45K | Mar 2024
AI Safety Support — Research on Trends in Machine Learning | AI Safety Support | $42K | Apr 2022
Berkeley Existential Risk Initiative — Language Model Alignment Research | Berkeley Existential Risk Initiative | $40K | Jun 2022
San José State University — AI Research | California State University, San José | $39K | Mar 2023
Berkeley Existential Risk Initiative — Oxford Martin AI Governance Initiative | Berkeley Existential Risk Initiative | $35K | Oct 2024
Berkeley Existential Risk Initiative — Lab Retreat | Berkeley Existential Risk Initiative | $35K | Jul 2023
AI Impacts — General Support (2016) | AI Impacts | $32K | Dec 2016
Virtue AI — NeurIPS Prizes | Virtue AI | $30K | Oct 2024
Berkeley Existential Risk Initiative — Algorithmic Alignment Group | Berkeley Existential Risk Initiative | $30K | Sep 2024
OpenMined — Research on Privacy-Enhancing Technologies and AI Safety | OpenMined | $28K | Apr 2022
Rutgers University — AI Takeoff Speeds Paper | Rutgers University | $27K | Oct 2024
University of California, Berkeley — AI Alignment Workshop | University of California, Berkeley | $26K | Oct 2023
ETH Zurich Foundation (USA) — Machine Learning Research Support | ETH Zurich Foundation (USA) | $25K | Nov 2023
Center for Long-Term Cybersecurity — AI Standards (2021) | University of California, Berkeley | $25K | Jul 2021
Distill Prize for Clarity in Machine Learning — General Support | Distill Prize for Clarity in Machine Learning | $25K | Mar 2017
Stanford University — Percy Liang Planning Grant | Stanford University | $25K | Mar 2017
Rice, Hadley, Gates and Manuel LLC — AI Accident Risk and Technology Competition | Rice, Hadley, Gates & Manuel LLC | $25K | Sep 2020
Center for a New American Security — AI Governance Projects | Center for a New American Security | $24K | Oct 2020
Gradient Institute — Australian AI Safety Forum | Gradient Institute | $20K | Oct 2024
Center for Long-Term Cybersecurity — AI Standards (2022) | University of California, Berkeley | $20K | Apr 2022
ETH Zurich — Research on Prompt Injection Attacks | ETH Zürich | $20K | Jun 2025
Touro College & University System — AI Governance Legal Research | Touro College & University System | $20K | Feb 2024
Centre for the Governance of AI — Research Assistant | GovAI | $19K | Sep 2022
Alignment Research Engineer Accelerator — AI Safety Technical Program | Alignment Research Engineer Accelerator | $19K | Feb 2023
Press Shop — Support for Human Compatible | Press Shop | $17K | Jan 2020
Thomas Liao — Foundation Model Tracker | Foundation Model Tracker | $15K | Oct 2022
International Conference on Machine Learning — AI Governance Workshop | International Conference on Machine Learning | $13K | Jun 2025
Catherine Brewer — OxAI Safety Hub | OxAI Safety Hub | $12K | Oct 2022
Inclusive Abundance Initiative — Abundance Conference | Inclusive Abundance Initiative | $10K | Oct 2024
GoalsRL — Workshop on Goal Specifications for Reinforcement Learning | GoalsRL | $7.5K | Aug 2018
Stanford University — NIPS Workshop on Machine Learning Security | Stanford University | $6.8K | Apr 2018
Stanford University — AI Safety Seminar | Stanford University | $6.5K | Jan 2020
International Conference on Learning Representations — Machine Learning Paper Awards | International Conference on Learning Representations | $3.5K | May 2020
International Conference on Learning Representations — Security and Safety in Machine Learning Systems Workshop | International Conference on Learning Representations | $3K | Apr 2021