Updated 2026-03-02
Jaan Tallinn

Person

Profile of Jaan Tallinn documenting $150M+ lifetime AI safety giving (86% of $51M in 2024), primarily through SFF ($34.33M distributed in a 2025 grant round). Co-founded CSER (2012) and FLI (2014), led Anthropic's $124M Series A (2021), early DeepMind investor.

Role: Co-founder of Skype; SFF co-founder
Related organizations: CSER (Centre for the Study of Existential Risk)
Related people: Max Tegmark

Quick Assessment

| Dimension | Assessment | Evidence |
|---|---|---|
| Giving Scale | Major Individual Donor | $51M in 2024; $150M+ estimated lifetime |
| Primary Vehicle | Survival and Flourishing Fund (SFF) | S-process algorithmic allocation; $34.33M distributed in a 2025 grant round |
| AI Safety Focus | ≈86% of giving | Remainder: biosecurity (≈7%), forecasting, fertility, longevity, other GCR |
| Wealth Source | Tech Exits + Investments | Skype (sold 2005), Kazaa; DeepMind (acquired 2014); Anthropic Series A (2021) |
| Organizations Founded | CSER, FLI | Centre for the Study of Existential Risk (Cambridge, 2012); Future of Life Institute (2014) |

| Source | Link |
|---|---|
| Wikipedia | en.wikipedia.org |
| LessWrong | 2024 Philanthropy Overview |
| FLI Profile | futureoflife.org |

Personal Details

| Attribute | Details |
|---|---|
| Born | February 14, 1972, Tallinn, Estonia |
| Education | BSc in Theoretical Physics, University of Tartu (1996) |
| Family | Married with six children (spouse's name not publicly disclosed)1 |
| Estimated Net Worth | Approximately $900 million (2019 estimate; significant crypto holdings)2 |
| Board Positions | Center for AI Safety (Board), UN AI Advisory Body, Bulletin of the Atomic Scientists (Board of Sponsors) |

Overview

Jaan Tallinn is an Estonian programmer, entrepreneur, and philanthropist. He co-founded Skype (2003, acquired by Microsoft for $8.5B in 2011) and developed the FastTrack P2P protocol behind Kazaa. After reading Eliezer Yudkowsky's writings on AI risk in 2009, he redirected his philanthropy toward existential risk reduction, donating an estimated $150M+ to AI safety and related causes.3

He co-founded two major organizations — the Centre for the Study of Existential Risk (CSER) at Cambridge in 2012, and the Future of Life Institute (FLI) in 2014. He was an early DeepMind investor and board member (2011), and led Anthropic's $124M Series A at a $550M pre-money valuation (2021), taking a board observer role.4
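The pre-money figure cited above implies a post-money valuation and a round-level equity share via simple arithmetic. A quick sketch (this computes the share sold in the round as a whole, not Tallinn's personal stake, since leading a round does not mean supplying all $124M):

```python
# Pre-money vs post-money arithmetic for the Series A figures cited above.
pre_money = 550_000_000   # reported pre-money valuation
raised = 124_000_000      # total Series A round size

post_money = pre_money + raised
round_share = raised / post_money  # fraction of the company sold in the round

print(f"post-money: ${post_money:,}")          # $674,000,000
print(f"round equity share: {round_share:.1%}")  # ~18.4%
```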

Tallinn describes his AI investment rationale as "having a voice of concern from the inside," though he has acknowledged the tension: "On the one hand, it's great to have this safety-focused thing. On the other hand, this is AI Proliferation." Whether minority board observer positions translate into meaningful safety influence is not established by independent evidence.3

His 2024 giving of approximately $51M concluded a formal five-year pledge (2020–2024) denominated in ETH (20,000 ETH/year). As of early 2025, no successor multi-year pledge has been announced, though he committed at least $10M to the 2025 SFF round.5
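Because the pledge was denominated in ETH rather than USD, its dollar value moved with the ETH price. A minimal sketch of that sensitivity, using hypothetical round-number prices (not historical quotes):

```python
# A fixed 20,000 ETH/year pledge translates into very different USD
# amounts depending on the ETH price. Prices below are illustrative only.
PLEDGE_ETH_PER_YEAR = 20_000

hypothetical_eth_prices = {  # USD per ETH, assumed for illustration
    "low": 500,
    "mid": 1_500,
    "high": 3_000,
}

for label, price in hypothetical_eth_prices.items():
    usd = PLEDGE_ETH_PER_YEAR * price
    print(f"{label}: ${usd:,} per year")
```

At the assumed prices the same pledge ranges from $10M to $60M per year, which is why annual giving totals vary so widely across the pledge period.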

Philanthropic Activities

flowchart TD
  JT[Jaan Tallinn<br/>\$150M+ estimated lifetime giving] --> SFF[Survival and Flourishing Fund<br/>Primary vehicle since 2019]
  JT --> LIGHT[Lightspeed Grants<br/>\$8M+ distributed since 2022]
  JT --> DIRECT[Direct Grants<br/>MIRI, researchers]
  JT --> FOUND[Co-Founded Organizations]

  SFF --> AI_SAFE[AI Safety ~86%<br/>MIRI, ARC, CAIS, Apollo, METR, FAR AI]
  SFF --> BIO[Biosecurity ~7%<br/>SecureBio, pandemic prevention]
  SFF --> OTHER[Other ~7%<br/>Forecasting, fertility, longevity, GCR]

  FOUND --> CSER[CSER<br/>Cambridge 2012<br/>~\$200K seed]
  FOUND --> FLI[FLI<br/>Boston 2014]

  style JT fill:#e6f3ff
  style SFF fill:#ccffcc
  style LIGHT fill:#ccffcc
  style FOUND fill:#ffffcc
  style AI_SAFE fill:#ffcccc

Key Giving Milestones

| Year | Amount | Notes |
|---|---|---|
| 2012 | ≈$200K | CSER seed funding6 |
| 2019 | ≈$2M | SFF established |
| 2020 | $10–15M | 5-year pledge began (20K ETH/year)7 |
| 2021 | $15–20M | Also led Anthropic $124M Series A (investment, not giving)4 |
| 2022 | $25–30M | Lightspeed Grants began (formally launched June 2023)8 |
| 2023 | $30–35M | Post-FTX expansion to fill funding gaps |
| 2024 | $51M+ | Concluded 5-year commitment5 |
| 2025 | $34.33M | SFF grant round (86% to AI safety)9 |

Primary Vehicles

Survival and Flourishing Fund (SFF) — Tallinn's primary giving vehicle since 2019. Uses the S-process algorithmic allocation with a network of recommenders (12 in the 2024 round). The 2025 round distributed $34.33M: 86% to AI safety, 7% to biosecurity, 7% to other causes. Notable recipients include MIRI, Center for AI Safety, Apollo Research, METR, FAR AI, Palisade Research, and SecureBio.9
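The core idea of an S-process-style allocation can be sketched in a few lines: recommenders supply diminishing marginal-value curves per organization, and money flows in chunks to whichever organization currently values the next dollar most. This is an illustrative toy, not SFF's actual implementation; the organization names, curves, and budget are invented:

```python
# Toy sketch of marginal-value-driven allocation (S-process-inspired).
# Each org has a diminishing-returns curve; dollars go greedily to the
# org whose *next* chunk of funding is most valuable.

def allocate(budget, marginal_value, orgs, step=1_000_000):
    """Hand out `budget` in `step`-sized chunks by greatest marginal value."""
    grants = {org: 0 for org in orgs}
    remaining = budget
    while remaining >= step:
        best = max(orgs, key=lambda o: marginal_value(o, grants[o]))
        if marginal_value(best, grants[best]) <= 0:
            break  # no org values further funding
        grants[best] += step
        remaining -= step
    return grants

# Hypothetical base values and a simple diminishing-returns curve:
# the value of the next dollar falls as an org receives more funding.
base = {"org_a": 3.0, "org_b": 2.2, "org_c": 1.5}

def marginal_value(org, funded_so_far):
    return base[org] / (1 + funded_so_far / 5_000_000)

result = allocate(10_000_000, marginal_value, list(base))
print(result)
```

The greedy loop naturally spreads funding across organizations once the front-runner's marginal value dips below the others', which is the qualitative behavior the S-process formalizes with recommender-supplied curves.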

Lightspeed Grants — Fast-turnaround grantmaking run by Lightcone Infrastructure, primarily funded by Tallinn. Approximately $8M distributed since 2022.8

AI Investments

Tallinn has invested over $130M in 100–200 technology startups through Metaplanet Holdings.10

| Investment | Year | Details |
|---|---|---|
| DeepMind | 2011 | Series A investor and board member; Google acquired for $400–650M (2014)11 |
| Anthropic | 2021 | Led $124M Series A; board observer. See Anthropic (Funder) for stake analysis4 |

Public Advocacy

Tallinn has been an active advocate for AI safety governance, serving on the UN AI Advisory Body and the EU Commission's High-Level Expert Group on AI. He has called for liability laws holding "both the users and developers of AI technology accountable for harms and risks produced by AI, including near-miss incidents."12

Key public positions signed:

  • 2023: FLI open letter calling for a six-month pause on training systems more powerful than GPT-4 (30,000+ signatures)
  • 2023: CAIS extinction risk statement
  • 2025: FLI statement calling for prohibition on superintelligence development until provably safe13

Criticisms

Capabilities acceleration: Critics argue investing in AI companies like Anthropic accelerates the technologies Tallinn views as dangerous. Tallinn has acknowledged: "this is proliferation... creating Anthropic might add to the competitive landscape, thus speeding development."3

Near-term vs. speculative risk: AI ethics researchers Timnit Gebru and Margaret Mitchell argued the 2023 FLI pause letter ignored "active harms" from existing AI systems. Gebru and Torres have characterized Tallinn as a subscriber to the "TESCREAL bundle" of ideologies, arguing these frameworks distort AI research priorities.1415

Influence concentration: SFF's S-process uses a small network of recommenders (12 in the 2024 round), concentrating significant influence over the AI safety field in a tightly connected group.

Key Uncertainties

| Uncertainty | Description |
|---|---|
| Post-Pledge Giving | No formal multi-year pledge post-2024; committed at least $10M to 2025 SFF round5 |
| Inside Influence | Whether the board observer role actually changes Anthropic's decisions; no independent verification |
| Wealth Variability | Significant crypto holdings mean giving capacity fluctuates with ETH/BTC prices |

Sources

Footnotes

  1. Wikipedia and Lifeboat Foundation profile confirm married with six children; spouse name not publicly disclosed.

  2. "He's Worried A.I. May Destroy Humanity", Fortune, November 2020.

  3. "Co-founder of Skype invested in hot AI startups but thinks he failed", Semafor, April 2023.

  4. "Anthropic raises $124 million", Anthropic, May 2021.

  5. Jaan Tallinn, "2024 Philanthropy Overview", LessWrong, early 2025.

  6. Centre for the Study of Existential Risk — Wikipedia.

  7. Jaan Tallinn, "Philanthropic Pledge", LessWrong, February 2020.

  8. Lightspeed Grants, launched June 2023.

  9. SFF 2025 funding by cause area — EA Forum.

  10. "Skype co-founder reveals he's invested over $130 million into start-ups", CNBC, November 2020.

  11. Google DeepMind — Wikipedia; acquisition price reported between $400M and $650M.

  12. Tallinn's statements on AI liability and datacenter regulation, 2023.

  13. Jaan Tallinn — Wikipedia, updated 2025.

  14. Margaret Mitchell and others critiqued the FLI pause letter for ignoring present AI harms, 2023.

  15. Timnit Gebru and Émile P. Torres, TESCREAL critique, 2023.

References

1. "Skype co-founder reveals he's invested over $130 million into start-ups" — CNBC · Reference

CNBC profiles Jaan Tallinn, co-founder of Skype, revealing he has invested over $130 million into technology start-ups, with a significant focus on AI safety and existential risk organizations. Tallinn is a prominent philanthropist and investor in the AI safety space, having co-founded and funded organizations such as CSER and MIRI.

★★★☆☆

2. Jaan Tallinn — Wikipedia · Reference

Wikipedia biography of Jaan Tallinn, Estonian software engineer and co-founder of Skype, who became a prominent AI safety philanthropist and activist. He co-founded the Centre for the Study of Existential Risk (CSER) and the Future of Life Institute (FLI), and has been a major funder of AI safety research. His transition from tech entrepreneur to existential risk advocate makes him a significant figure in the AI safety community.

★★★☆☆

3. Center for AI Safety (CAIS) · Reference

The Center for AI Safety (CAIS) is a research organization focused on mitigating catastrophic and existential risks from advanced AI systems. It conducts technical research, publishes surveys and statements, and supports field-building efforts across academia and industry. CAIS is notable for its broad coalition-building, including its widely-cited statement on AI extinction risk signed by leading researchers.

★★★★☆

4. Google DeepMind — Wikipedia · Reference

Wikipedia's reference article on Google DeepMind, the AI research laboratory formed from the merger of Google Brain and DeepMind in 2023. It covers the organization's history, structure, key research areas, notable achievements, and leadership, providing a factual overview of one of the world's leading AI research organizations.

★★★☆☆

5. Anthropic Series A announcement · Reference

Anthropic announced its $124 million Series A funding round in May 2021, marking the company's public launch as an AI safety and research organization. The funding was intended to support development of more reliable and interpretable AI systems with a focus on safety.

★★★★☆

Structured Data

6 facts · 2 records

Birth Year: 1972

All Facts (6)

Financial

| Property | Value | As Of | Source |
|---|---|---|---|
| Valuation | $900,000,000 | 2025 | |
| Total Funding Raised | $20,000,000 | 2024 | |

Biographical

| Property | Value | As Of | Source |
|---|---|---|---|
| Net Worth | $900 million | 2025 | |
| Birth Year | 1972 | | |
| Education | University of Tartu; Gustav Adolf Grammar School | | |

Career History

| Organization | Title | Start | End |
|---|---|---|---|
| Skype | Co-founder, Lead Developer | 2003 | 2008 |
| Future of Life Institute | Co-founder | 2014 | |

Related Wiki Pages

Top Related Pages

Organizations: Frontier Model Forum · Anthropic · Palisade Research · LessWrong · FAR AI · METR

Analysis: Elon Musk (Funder) · Model Organisms of Misalignment · AI Risk Portfolio Analysis · Anthropic IPO

Other: Dustin Moskovitz · Max Tegmark

Approaches: AI Safety Intervention Portfolio · AI Safety Field Building Analysis

Concepts: EA Shareholder Diversification from Anthropic · Funders Overview

Risks: AI Proliferation

Historical: Mainstream Era