Longterm Wiki

Frontier Model Forum - New AI Safety Fund Grantees

web

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Frontier Model Forum

This announcement from the Frontier Model Forum (a consortium of major AI labs) is relevant to understanding industry-level coordination on AI safety funding and which research areas are being prioritized by leading AI companies.

Metadata

Importance: 45/100
Tags: press release, news

Summary

The Frontier Model Forum announces new grantees for its AI Safety Fund, which supports independent research into AI safety challenges. The fund, established by major AI labs including Anthropic, Google, Microsoft, and OpenAI, aims to advance technical and governance research to make frontier AI systems safer. This announcement highlights specific research projects and organizations receiving funding.

Key Points

  • The AI Safety Fund is a joint initiative by leading frontier AI companies to pool resources for independent safety research.
  • Grants support a range of technical AI safety research areas including evaluation, red-teaming, and alignment techniques.
  • The announcement reflects industry-level coordination on funding safety research outside of individual company labs.
  • Grantees represent academic institutions and independent research organizations working on frontier AI safety problems.
  • The fund represents an attempt by industry to demonstrate commitment to responsible AI development through external research investment.

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| Frontier Model Forum | Organization | 58.0 |

Cached Content Preview

HTTP 200 | Fetched Mar 20, 2026 | 4 KB

## Announcement of New AI Safety Fund Grantees

By:

Frontier Model Forum

Posted on:

11th December 2025

Today we are announcing a new cohort of 11 grantees who have received more than $5 million through the AI Safety Fund (AISF). As frontier AI systems become more powerful and widely deployed, advancing our understanding of them and building robust safety tools is essential – which is why the AISF issued several requests for proposals late last year in [Biosecurity](https://aisfund.org/wp-content/uploads/2024/11/AISF-Biosecurity-RFP.pdf) and [Cybersecurity](https://aisfund.org/wp-content/uploads/2024/11/AISF-Cybersecurity-RFP.pdf), as well as [AI Agent Evaluation and Synthetic Content](https://aisfund.org/wp-content/uploads/2025/01/AISF-AI-Agent-Evals-RFP-1.pdf).

Spanning diverse approaches to frontier AI safety and security, the funded projects include:

- Apollo Research, _Building black box scheming monitors for Frontier AI agents_
- California Institute of Technology, _AI-driven Detection of Protein Mimetic Biothreats with BioSentinel_
- Institute for Decentralized AI (part of Cosmos Institute), _Scalable, Decentralized Oversight for Multi-Agent Networks_
- Faculty AI, _Automated Red-Teaming for Biosecurity Risks_
- FAR.AI, _Quantifying the Safety-Adversary Gap in Large Language Models_
- FutureHouse, Inc., _Pioneering AI-Driven Experimental Design: Benchmarks for Responsible Innovation_
- Morgan State University, _Evaluating AI-Assisted Cybersecurity Operations: Comparative Analysis of Human Performance with and without AI Tools_
- Nemesys Insights LLC, _ICS Benchmark and Human Uplift Study_
- SecureBio, _Evaluations to Assess Agent AIs’ Execution of Tasks That Could Enable Large-scale Harm_
- University of Illinois Urbana-Champaign, _Cybersecurity Risk Evaluations of AI Agents with Computer Interaction Capabilities_
- University of Toronto, _Analyzing the Emergent Role of Sanctioning in Regulating Multi-Agent LLM Systems_

The projects were selected from over 100 competitive proposals through a rigorous review process. As with the [initial cohort](https://aisfund.org/update-on-the-aisf-grantmaking-and-upcoming-funding-opportunity/) of grantees, we are excited to support each of the AISF recipients and look forward to their scientific contributions and impact.

**Update on the AI Safety Fund**

With over $10 million in funding, the AISF was established in late 2023 as a collaborative initiative among leading AI developers and philanthropic partners. It aims to support and expand the field of AI safety research to promote the responsible development and deployment of frontier models, minimize risks, and enable independent, standardized evaluations of capabilities and safety.

The Meridian Institute initially managed and oversaw the fund. Support for the AISF came from the founding members of the Frontier Mode

... (truncated, 4 KB total)
Resource ID: e159959847bc87f7 | Stable ID: NDM2ZDk0Nz