The OpenAI Foundation is a 501(c)(3) nonprofit (formerly OpenAI, Inc.) that retains governance control over OpenAI Group PBC through its sole authority to appoint and remove all PBC board directors. It holds a 26% equity stake valued at approximately $130 billion, operates the Safety and Security Committee (which has the power to halt model releases), and has committed $1 billion in grants for 2026-2027 across health, AI resilience, economic impact, and civil society programs.
Revenue: $5.4 million
Headcount: 375 (as of Jan 2023)
Financial
- Equity Stake: 26%
- Equity Value: $130 billion
- Headcount: 375
- Revenue: $5.4 million
People
- Founder: Elon Musk

Biographical
- Notable For: Four focus areas announced March 2026: (1) Life Sciences & Curing Diseases, (2) Jobs and Economic Impact, (3) AI Resilience, (4) Community Programs including the People-First AI Fund.
Approaches
- AI Lab Safety Culture: Comprehensive assessment of AI lab safety culture showing systematic failures: no company scored above C+ overall (FLI Winter 2025), all received D/F on existential safety, ~50% of OpenAI safety st... (Quality: 62/100)
- Government AI Use Monitoring: Systematic tracking and transparency efforts focused on how governments deploy AI systems, particularly for surveillance, enforcement, and decision-making that affects citizens' rights. Became crit...
Analysis
- Planning for Frontier Lab Scaling: Strategic framework analyzing how non-lab actors could respond to frontier AI labs deploying $100-300B+ pre-TAI. For philanthropies: analysis of potential shifts from matching spend to maximizing l... (Quality: 55/100)
- Frontier Lab Cost Structure: Analysis of capital allocation at frontier AI labs. OpenAI operates on approximately $20B ARR with $14B+ annual costs; Anthropic on approximately $9B ARR with $7-10B costs; Google DeepMind within A... (Quality: 53/100)
- Anthropic (Funder): Comprehensive model of EA-aligned philanthropic capital at Anthropic. At $380B valuation (Series G, Feb 2026, $30B raised): $27-76B risk-adjusted EA capital expected. Total funding raised exceeds $... (Quality: 65/100)
- Anthropic Founder Pledges: Interventions to Increase Follow-Through: Evaluates interventions to make Anthropic founders' 80% donation pledges more likely to be fulfilled. Distinguishes collaborative interventions founders would welcome (DAF tax planning, foundation ... (Quality: 45/100)
- Safety Spending at Scale: Models what AI safety spending could accomplish at different budget levels from $1B to $50B+/year. Current global safety spending (~$500M-1B/year) is 100-600x below capabilities investment. At $5B/... (Quality: 55/100)
Other
- Sam Altman (Person): Comprehensive biographical profile of Sam Altman documenting his role as OpenAI CEO, timeline predictions (AGI within presidential term, superintelligence in "few thousand days"), and controversies... (Quality: 40/100)
- Bret Taylor (Person): Chair of the OpenAI Foundation and OpenAI Group PBC board. Co-founder and CEO of Sierra AI. Former co-CEO of Salesforce. Former CTO of Facebook.
- Adam D'Angelo (Person): Co-founder and CEO of Quora. Member of the OpenAI Foundation board and Safety and Security Committee. Former CTO of Facebook.
- Zico Kolter (Person): Chair of the OpenAI Safety and Security Committee. Professor of Computer Science at Carnegie Mellon University. Sits exclusively on the Foundation board (not the PBC board) with full observation ri...
- Paul Nakasone (Person): Member of the OpenAI Foundation board and Safety and Security Committee. Retired U.S. Army General. Former Director of the National Security Agency (NSA) and Commander of U.S. Cyber Command.
- Nicole Seligman (Person): Member of the OpenAI Foundation board and Safety and Security Committee. Former Executive Vice President and General Counsel of Sony Corporation.
Concepts
- Existential Risk from AI: Hypotheses concerning risks from advanced AI systems that some researchers believe could result in human extinction or permanent global catastrophe, including institutional frameworks developed by ... (Quality: 92/100)
- Funders Overview: Overview of major funders supporting AI safety, existential risk reduction, and longtermist causes. These organizations and individuals collectively provide hundreds of millions of dollars annually... (Quality: 3/100)
Organizations
- Anthropic: Comprehensive reference page on Anthropic covering financials ($380B valuation, $14B ARR at Series G growing to $19B by March 2026), safety research (Constitutional AI, mechanistic interpretability... (Quality: 74/100)
- Survival and Flourishing Fund: SFF distributed $141M since 2019 (primarily from Jaan Tallinn's ~$900M fortune), with the 2025 round totaling $34.33M (86% to AI safety). Uses unique S-process mechanism where 6-12 recommenders exp... (Quality: 59/100)
- Coefficient Giving: Coefficient Giving (formerly Open Philanthropy) has directed $4B+ in grants since 2014, including $336M to AI safety (~60% of external funding). The organization spent ~$50M on AI safety in 2024, w... (Quality: 55/100)
- Future of Life Institute: Comprehensive profile of FLI documenting $25M+ in grants distributed (2015: $7M to 37 projects, 2021: $25M program), major public campaigns (Asilomar Principles with 5,700+ signatories, 2023 Pause ... (Quality: 46/100)
Historical
- Deep Learning Revolution Era: Comprehensive timeline documenting 2012-2020 AI capability breakthroughs (AlexNet, AlphaGo, GPT-3) and parallel safety field development, with quantified metrics showing capabilities funding outpac... (Quality: 44/100)