Longterm Wiki

FAR AI Grantmaking Program


Credibility Rating

4/5 (High)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: FAR AI

FAR AI is an AI safety research organization; this page describes its grantmaking program, which is relevant to researchers seeking funding and to those mapping the AI safety funding landscape.

Metadata

Importance: 35/100 · homepage

Summary

FAR AI (originally the Fund for Alignment Research) runs a grantmaking program to fund AI safety research and related work. The program supports projects aligned with FAR AI's mission of reducing risks from advanced AI systems, providing financial resources to researchers and organizations working on technical and governance challenges in AI safety.

Key Points

  • FAR AI offers grants to support AI safety research projects and researchers
  • Funding targets work aligned with reducing existential and catastrophic risks from advanced AI
  • Program covers technical AI safety work and may also extend to governance-related work
  • Represents an organizational funding mechanism within the broader AI safety ecosystem

Cited by 1 page

Page      Type          Quality
FAR AI    Organization  76.0

Cached Content Preview

HTTP 200 · Fetched Feb 22, 2026 · 4 KB
Grantmaking

Funding groundbreaking research in AI safety

 
 FAR.AI Grant Program

FAR.AI supports academics and independent researchers in developing innovative solutions to critical AI risks through our targeted grantmaking program. Currently, due to limited evaluation capacity, we are only able to consider researchers nominated by experts with a strong track record. We plan to launch public requests for proposals (RFPs) soon, focused on high-impact research areas. Our grantmaking is funded by a $12 million grant generously provided by Open Philanthropy.

Grants:

Failure Modes in Superhuman Systems
Florian Tramèr, ETH Zurich
Broad project examining robustness across four vectors: data poisoning, consistency checks, model stealing, and prompt injection.

Comprehensive Red-Teaming Framework
Wenbo Guo, UC Santa Barbara
Building automated testing systems for LLM alignment against both training-phase and testing-phase threats, with a focus on developing agent-based systems that can generate adversarial prompts.

Explaining Superhuman AI Decisions
Nicholas Tomlin, UC Berkeley
Using weak-to-strong generalization to explain superhuman AI systems' decisions, focusing on domains like chess/Go where superhuman AI already exists.

Securing Alignment
Ashwinee Panda, University of Maryland College Park
Developing methods to make alignment more secure against jailbreaks, prefilling attacks, and finetuning attacks, with approaches spanning the entire model lifecycle.
 
 
 
 
 
 

... (truncated, 4 KB total)
Resource ID: f39e450eac7bbaa9 | Stable ID: ZWQwMmU4Nj