Anthropic is an AI safety company founded in January 2021 by former OpenAI researchers, including siblings Dario and Daniela Amodei. The company was created following disagreements over OpenAI's direction, particularly concerns about the pace of commercialization and the deepening partnership with Microsoft.
Key Metrics
Revenue (ARR): $19 billion (as of Mar 2026)
Valuation: $380 billion (as of Feb 2026)
Headcount: 4,074 (as of Jan 2026)
Total Funding Raised: $10.75 billion (as of Feb 2026)
AI Models: 15
Equity Breakdown (based on the $380B valuation)
Employee Equity Pool: 12–18% (~$57B)
Google / Alphabet: 13–15% (~$53B)
Sam McCandlish: 2–3% (~$9.5B)
Daniela Amodei: 2–3% (~$9.5B)
Jared Kaplan: 2–3% (~$9.5B)
Dario Amodei: 2–3% (~$9.5B)
Jack Clark: 2–3% (~$9.5B)
Chris Olah: 2–3% (~$9.5B)
Tom Brown: 2–3% (~$9.5B)
Dustin Moskovitz: 0.8–2.5% (~$6.3B)
Jaan Tallinn: 0.6–1.7% (~$4.4B)
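The dollar figures above appear to be each stake's midpoint applied to the $380B valuation (e.g. 2.5% × $380B ≈ $9.5B). A minimal Python sketch of that arithmetic, assuming the midpoint convention (the convention is an inference, not stated in the source; the stakes are copied from the table):

```python
# Implied equity values at a $380B valuation.
# Assumption: the table's dollar figures use the midpoint of each stake range.
VALUATION_B = 380.0  # valuation in billions of USD

stakes = {  # stakeholder: (low %, high %), copied from the table above
    "Employee Equity Pool": (12.0, 18.0),
    "Google / Alphabet":    (13.0, 15.0),
    "Dario Amodei":         (2.0, 3.0),
    "Dustin Moskovitz":     (0.8, 2.5),
    "Jaan Tallinn":         (0.6, 1.7),
}

for name, (lo, hi) in stakes.items():
    mid_pct = (lo + hi) / 2
    value_b = VALUATION_B * mid_pct / 100
    print(f"{name}: {mid_pct:.2f}% -> ~${value_b:.1f}B")

# Seven co-founders at 2-3% each: 14-21% of the company combined,
# i.e. $53.2B-$79.8B, consistent with the $53-80B Equity Value fact below.
lo_total = VALUATION_B * 7 * 2.0 / 100
hi_total = VALUATION_B * 7 * 3.0 / 100
print(f"Founders combined: ${lo_total:.1f}B-${hi_total:.1f}B")
```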
Funding Rounds
Total raised: $52 billion
Enterprise Market Share
Anthropic: 32%
OpenAI: 27%
Facts (34)
Financial
Gross Margin: 63%
Revenue: $19 billion
Secondary Market Valuation: $595 billion
Valuation: $380 billion
Total Funding Raised: $10.75 billion
Equity Stake: 0.8–2.5%
Equity Value: $53–80 billion
Product Revenue: $2.5 billion
Employee Tender Offer: $5.5 billion
Revenue Guidance: $20–26 billion
Headcount: 4,074
Infrastructure Investment: $50 billion
Annual Cash Burn: $3 billion
Enterprise Market Share: 32%
Retention Rate: 88%
Customer Concentration: 25%
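Several of these figures are simple functions of each other. A brief sketch of the implied arithmetic, assuming the listed figures are contemporaneous (the derived multiples are illustrative computations, not source facts):

```python
# Illustrative arithmetic over the Financial facts above.
# Inputs are copied from the list; derived figures are implied, not sourced.
revenue_b = 19.0           # Revenue (ARR), $B
gross_margin = 0.63        # Gross Margin
valuation_b = 380.0        # Valuation, $B
guidance_b = (20.0, 26.0)  # Revenue Guidance range, $B

gross_profit_b = revenue_b * gross_margin      # ~$12.0B implied gross profit
valuation_multiple = valuation_b / revenue_b   # 20x ARR
guidance_growth = tuple(g / revenue_b - 1 for g in guidance_b)  # +5% to +37%

print(f"Implied gross profit: ${gross_profit_b:.1f}B")
print(f"Valuation / ARR multiple: {valuation_multiple:.0f}x")
print(f"Guidance implies {guidance_growth[0]:+.0%} to {guidance_growth[1]:+.0%} growth")
```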
Other
Funding Round: $30 billion
Philanthropic Capital: $27–76 billion
2028
Products & Usage
Monthly API Calls: 25 billion
Business Customers: 300,000
Monthly Active Users: 18.9 million
Organization
Country: United States
Headquarters: San Francisco
Founded Date: Jan 2021
Legal Structure: Public benefit corporation
Safety & Research
AI Safety Level: ASL-3 (Opus 4, Opus 4.5, Opus 4.6); ASL-2 (Sonnet, Haiku)
Anthropic Core Views (Safety Agenda): Anthropic allocates 15–25% of R&D (~$100–200M annually) to safety research, including the world's largest interpretability team (40–60 researchers), while maintaining $5B+ revenue by 2025. Their RSP... (Quality: 62/100)
Approaches
Weak-to-Strong Generalization (Approach): Weak-to-strong generalization tests whether weak supervisors can elicit good behavior from stronger AI systems. OpenAI's ICML 2024 experiments show 80% Performance Gap Recovery on NLP tasks with co... (Quality: 91/100)
Constitutional AI (Approach): Constitutional AI is Anthropic's methodology using explicit principles and AI-generated feedback (RLAIF) to train safer models, achieving 3–10x improvements in harmlessness while maintaining helpfu... (Quality: 70/100)
Analysis
Anthropic (Funder) (Analysis): Comprehensive model of EA-aligned philanthropic capital at Anthropic. At a $380B valuation (Series G, Feb 2026, $30B raised): $27–76B risk-adjusted EA capital expected. Total funding raised exceeds $... (Quality: 65/100)
Policy
Voluntary AI Safety Commitments (Policy): Comprehensive empirical analysis of voluntary AI safety commitments showing a 53% mean compliance rate across 30 indicators (ranging from 13% for Apple to 83% for OpenAI), with strongest adoption in ... (Quality: 91/100)
California SB 53 (Policy): California SB 53 represents the first U.S. state law specifically targeting frontier AI safety through transparency requirements, incident reporting, and whistleblower protections, though it makes ... (Quality: 73/100)
Other
Anthropic Stakeholders (Table): Consolidated ownership table for Anthropic at a $380B valuation (Series G, Feb 2026). Seven co-founders hold 2–3% each ($7.6–11.4B per founder, $53–80B total). Google owns ~14% ($53B), Amazon holds a... (Quality: 60/100)
Dario Amodei (Person): Comprehensive biographical profile of Anthropic CEO Dario Amodei documenting his competitive safety development philosophy, 10–25% catastrophic risk estimate, 2026–2030 AGI timeline, and Constituti... (Quality: 41/100)
Scalable Oversight (Research Area): Process supervision achieves 78.2% accuracy on MATH benchmarks (vs 72.4% outcome-based) and is deployed in OpenAI's o1 models, while debate shows 60–80% accuracy on factual questions with +4% impro... (Quality: 68/100)
Interpretability (Research Area): Mechanistic interpretability has extracted 34M+ interpretable features from Claude 3 Sonnet with 90% automated labeling accuracy and demonstrated 75–85% success in causal validation, though less th... (Quality: 66/100)
Claude (AI Model): Claude is Anthropic's family of AI assistants, first released in March 2023. The product line spans three tiers: Haiku (fast/cheap), Sonnet (balanced), and Opus (most capable), with major generat...
Organizations
US AI Safety Institute (Organization): The US AI Safety Institute (AISI), established November 2023 within NIST with a $10M budget (FY2025 request: $82.7M), conducted pre-deployment evaluations of frontier models through MOUs with OpenAI a... (Quality: 91/100)
Historical
Anthropic-Pentagon Standoff (2026) (Event): Comprehensive analysis of the February 2026 confrontation between Anthropic and the US government. Triggered when Claude AI was used in the January 2026 Venezuela raid via Palantir, Anthropic refus... (Quality: 70/100)
Mainstream Era (Historical): Comprehensive timeline of AI safety's transition from niche to mainstream (2020–present), documenting ChatGPT's unprecedented growth (100M users in 2 months), the OpenAI governance crisis, and firs... (Quality: 42/100)
Concepts
AI Welfare and Digital Minds (Concept): AI welfare represents an emerging field examining whether AI systems deserve moral consideration based on consciousness, sentience, or agency, with growing institutional support from organizations ... (Quality: 63/100)
Agentic AI (Capability): Analysis of agentic AI capabilities and deployment challenges, documenting industry forecasts (40% of enterprise apps by 2026, $199B market by 2034) alongside implementation difficulties (40%+ proj... (Quality: 68/100)
Situational Awareness (Capability): Comprehensive analysis of situational awareness in AI systems, documenting that Claude 3 Opus fakes alignment at a 12% baseline (78% post-RL), 5 of 6 frontier models demonstrate scheming capabilities, a... (Quality: 67/100)
Large Language Models (Concept): Comprehensive assessment of LLM capabilities showing training costs growing 2.4x/year ($78–191M for frontier models, though DeepSeek achieved near-parity at $6M), o3 reaching 91.6% on AIME and 87.5... (Quality: 62/100)
Key Debates
AI Alignment Research Agendas (Crux): Comprehensive comparison of major AI safety research agendas ($100M+ Anthropic, $50M+ DeepMind, $5–10M nonprofits) with detailed funding, team sizes, and failure mode coverage (25–65% per agenda). ... (Quality: 69/100)
Technical AI Safety Research (Crux): Technical AI safety research encompasses six major agendas (mechanistic interpretability, scalable oversight, AI control, evaluations, agent foundations, and robustness) with 500+ researchers and $... (Quality: 66/100)