Longterm Wiki
Machine Intelligence Research Institute (MIRI)

Comprehensive organizational history documenting MIRI's trajectory from pioneering AI safety research (2000-2020) to policy advocacy after acknowledging research failure, with detailed financial data showing $5M annual deficit and ~2 year runway. Provides well-sourced analysis of the organization's $25.6M revenue peak (2021), subsequent decline, and strategic pivot away from technical alignment work.

Type: Safety Org
Founded: 2000
Location: Berkeley, CA
Employees: ~40
Funding: ~$5M/year
Related
People
Eliezer Yudkowsky · Nate Soares · Paul Christiano
Risks
Instrumental Convergence · Corrigibility Failure · Sharp Left Turn
Organizations
Alignment Research Center

Quick Assessment

  • Historical significance — First organization to focus on ASI alignment as a technical problem; among the first to recognize ASI as the most important event of the twenty-first century (MIRI About)
  • Current strategy — Policy advocacy to halt AI development; major 2024 pivot after acknowledging alignment research was "extremely unlikely to succeed in time" (MIRI About)
  • Research output — Minimal recent publications; near-zero new publications from core researchers between 2018 and 2022 (LessWrong)
  • Financial status — Operating at a deficit with roughly two years of runway; $4.97M net loss in 2024, $15.24M in net assets (ProPublica)
  • Field impact — Controversial but influential; raised awareness of AI risk but drew criticism for its theoretical approach and failed research programs (LessWrong)

Overview

The Machine Intelligence Research Institute (MIRI) is a 501(c)(3) nonprofit research organization based in Berkeley, California, founded in 2000 by Eliezer Yudkowsky with funding from Brian and Sabine Atkins Wikipedia. Originally named the Singularity Institute for Artificial Intelligence (SIAI), MIRI was the first organization to advocate for and work on artificial superintelligence (ASI) alignment as a technical problem MIRI About.

The organization has undergone several dramatic strategic pivots throughout its 24-year history. Initially created to accelerate AI development, MIRI shifted focus in 2005 when Yudkowsky became concerned about superintelligent AI risks Wikipedia. After two decades of technical research, MIRI announced a major strategy pivot in 2024, moving away from alignment research toward policy advocacy aimed at halting the development of increasingly general AI models MIRI About. This shift came after the organization acknowledged that its primary research initiative had "largely failed" MIRI 2024 Update.

With approximately 42 employees ProPublica and an interdisciplinary approach that deliberately hires from computer science, economics, mathematics, and philosophy backgrounds Future of Life Institute, MIRI aligns itself with the principles and objectives of the effective altruism movement Wikipedia.

History

Founding and Early Years (2000-2005)

MIRI was established in 2000 with a paradoxical original mission: accelerating AI development. The organization operated under this goal until 2005, when founder Eliezer Yudkowsky's concerns about superintelligent AI risks prompted a fundamental reorientation toward AI safety Wikipedia. That same year, the organization relocated from Atlanta to Silicon Valley Wikipedia, positioning itself at the heart of the technology industry.

Singularity Summit Era (2006-2012)

Beginning in 2006, MIRI organized the annual Singularity Summit to discuss AI's future and risks, initially in cooperation with Stanford University and with funding from Peter Thiel Wikipedia. These summits became prominent venues for discussing the implications of advanced artificial intelligence and helped raise awareness of AI safety concerns within both academic and technology communities.

In December 2012, the institute sold its name, web domain, and the Singularity Summit to Singularity University Wikipedia, marking the end of this public outreach phase. The following month, in January 2013, the organization adopted its current name: Machine Intelligence Research Institute Wikipedia.

Technical Research Focus (2012-2020)

During this period, MIRI pursued an ambitious agenda focused on mathematical foundations of AI safety. The organization published actively on topics including logical uncertainty and probabilistic reasoning, decision theory and agent foundations, AI alignment and value learning, corrigibility and interruptibility, formal verification of AI systems, and mathematical foundations of safe AI MIRI Publications.

MIRI received significant funding during this era. Coefficient Giving provided $2,652,500 over two years in February 2019 for general support, increasing their annual support from $1.4 million in 2018 to $2.31 million in 2019 Coefficient Giving. In April 2020, Coefficient Giving awarded MIRI its largest grant to date: $7,703,750, with $6.24 million from Coefficient Giving's main funders and $1.46 million from a partnership with BitMEX co-founder Ben Delo Coefficient Giving. At this peak, Coefficient Giving was providing approximately 60% of MIRI's predicted budgets for 2020-2021 Coefficient Giving.

Beyond foundation grants, MIRI's revenue spiked to $25.6 million in 2021 ProPublica, driven in part by a $4.3 million Ethereum donation from Vitalik Buterin MIRI.

Strategic Collapse and Pivot (2020-Present)

The 2020 update revealed a critical turning point: MIRI's primary research initiative had "largely failed," prompting years of regrouping MIRI 2024 Update. By 2021, MIRI announced a reduced emphasis on technical research in favor of advocacy and policy influence, citing diminishing returns on alignment progress. This led to near-zero new publications from core researchers between 2018 and 2022 LessWrong.

The organization also became "more pessimistic that such work will have time to bear fruit" regarding technical alignment research without policy interventions MIRI 2024 Update. This assessment culminated in the 2024 announcement of a major strategy pivot away from alignment research entirely and toward policy advocacy aimed at halting frontier AI development MIRI About.

Current Operations

Organizational Structure

MIRI operates as a 501(c)(3) nonprofit with approximately 42 employees ProPublica. The organization deliberately hires from diverse backgrounds including computer science, economics, mathematics, and philosophy, recognizing that AI safety requires interdisciplinary perspectives Future of Life Institute.

The leadership team includes:

  • Eliezer Yudkowsky - Chair and Head Researcher ($599,970 compensation in 2024) ProPublica
  • Malo Bourgon - CEO ($241,531 compensation in 2024) ProPublica
  • Nate Soares - President ($236,614 compensation in 2024) ProPublica
  • Scott Garrabrant - Employee ($296,735 compensation in 2024) ProPublica
  • Benya Fallenstein - Research Fellow ($239,947 compensation in 2024) ProPublica

Financial Position

MIRI's financial situation has deteriorated significantly from its 2021 peak. The organization reported $1,534,913 in total revenue for 2024, while expenses reached $6,508,701, resulting in a net loss of $4,973,788 ProPublica. Despite this deficit, MIRI maintains $16,493,789 in total assets and $15,242,215 in net assets ProPublica, providing approximately two years of operational runway ProPublica.

Executive compensation represented $3,132,826, or 48.1% of total expenses in 2024 ProPublica. The organization projected spending $5.6 million in 2024 and expects expenses of $6.5 million to $7 million in 2025 ProPublica.
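The runway figure can be sanity-checked against the reported numbers. A minimal sketch follows; the projected-expense scenario is my assumption (based on MIRI's own $6.5M-$7M 2025 projection), and the shorter published ~2-year estimate presumably also prices in further revenue decline:

```python
# Rough sanity check of MIRI's runway from the 2024 figures cited above
# (ProPublica). The projected-expense scenario below is an assumption
# for illustration, not a figure from the filings.
net_assets = 15_242_215      # net assets at end of 2024
revenue = 1_534_913          # 2024 total revenue
expenses = 6_508_701         # 2024 total expenses

deficit = expenses - revenue                 # $4,973,788 net loss
runway_2024_rate = net_assets / deficit      # ~3.1 years at the 2024 burn rate

# If expenses rise to ~$7M while revenue stays flat, the runway shortens:
runway_projected = net_assets / (7_000_000 - revenue)   # ~2.8 years

print(f"2024 deficit: ${deficit:,}")
print(f"Runway at 2024 rate: {runway_2024_rate:.1f} years")
print(f"Runway at projected rate: {runway_projected:.1f} years")
```

At either burn rate the assets last closer to three years than two, so the organization's ~2-year framing appears conservative.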

Current Strategy

MIRI's current focus is on attempting to halt the development of increasingly general AI models through discussions with policymakers about the extreme risks artificial superintelligence poses MIRI About. This represents a dramatic departure from the organization's historical emphasis on technical alignment research.

The organization acknowledges the pessimistic nature of this approach, stating that policy efforts are "very unlikely to save us, but all other plans we know of seem even less likely to succeed" MIRI 2024 Update. This reflects a belief that alignment research is "extremely unlikely to succeed in time to prevent an unprecedented catastrophe" MIRI About.

Research Legacy

Publication Timeline

MIRI's research output followed a clear trajectory. Between 2012 and 2016, the organization actively published on topics like logical uncertainty, decision theory, and AI alignment MIRI Publications. However, from 2018 to 2022, core researchers produced near-zero new publications LessWrong, reflecting the organization's acknowledgment that its foundational bet on mathematical formalization had underdelivered relative to capability advances LessWrong.

Research Areas

MIRI's technical work focused on six core areas, all aimed at developing mathematical foundations for safe artificial intelligence:

  1. Logical uncertainty and probabilistic reasoning - Developing frameworks for reasoning under logical uncertainty MIRI Publications
  2. Decision theory and agent foundations - Theoretical work on how rational agents should make decisions MIRI Publications
  3. AI alignment and value learning - Methods for ensuring AI systems pursue intended goals MIRI Publications
  4. Corrigibility and interruptibility - Designing systems that can be safely modified or shut down MIRI Publications
  5. Formal verification of AI systems - Mathematical proofs of system properties MIRI Publications
  6. Mathematical foundations of safe AI - Fundamental theoretical work underlying safety approaches MIRI Publications

Assessment of Research Impact

Eliezer Yudkowsky has said the gameboard looks "incredibly grim" to him because, from his perspective, the field has made almost no progress on the alignment problem LessWrong. This pessimistic evaluation reflects MIRI's acknowledgment that its foundational bet on mathematical formalization had underdelivered relative to capability advances LessWrong.

Despite these internal assessments, MIRI received recognition as a recommended charity from Raising for Effective Giving, which cited the organization's impact potential in preventing "vast amounts of future suffering," the funding gap for AI safety work, and its effective methodology with historical precedent in computer science foundations Raising for Effective Giving.

Criticisms and Controversies

Organizational Criticisms

MIRI has faced allegations of cult-like dynamics, with critics claiming that "MIRI and LW [are] just an Eliezer-worshipping cult" LessWrong. A LessWrong compilation of MIRI criticisms identified Holden Karnofsky's critique as "the best criticism of MIRI as an organisation" LessWrong.

Even major funders expressed reservations. Coefficient Giving's 2016 evaluation included significant concerns about MIRI's Agent Foundations research agenda, though they continued supporting the organization for other reasons Coefficient Giving.

Research Methodology Critiques

Critics have questioned whether theoretical work can be done so far in advance of testing and experimentation LessWrong. This challenge to MIRI's highly theoretical approach proved prescient, as the organization itself later acknowledged that its primary research initiative had "largely failed" MIRI 2024 Update.

Technical Disagreements

Several core technical assumptions have faced criticism:

Generalization thesis vagueness: Critics challenge MIRI's "generalization thesis" - the idea that smart systems exhibiting markers of goal-directedness will pick up dangerous varieties of it through generalization - as "unsatisfyingly vague" LessWrong.

Goal-directedness concept: Critics note that "whether coding assistants are less 'generally goal-directed' than a hypothetical machine that manipulates users... is actually theoretically undecidable," questioning whether goal-directedness will cause the behaviors MIRI worries about LessWrong.

Policy Position Critiques

MIRI's current advocacy for shutting down AI research has drawn significant pushback. Critics argue this position "would obviously be very difficult, and very damaging (because we don't get the benefits of AI for all time it's shut down)" LessWrong.

Influence and Legacy

MIRI was among the first organizations to recognize the future invention of artificial superintelligence as the most important and potentially catastrophic event in the twenty-first century MIRI About. This early recognition helped establish AI safety as a legitimate field of study and influenced the development of alignment research at major AI laboratories.

The organization's alignment with effective altruism principles Wikipedia and its role in hosting the Singularity Summit contributed to raising awareness of AI safety concerns among philanthropists, researchers, and policymakers. MIRI's work influenced the emergence of AI safety as a funded research area, even as its own technical research program ultimately failed to achieve its goals.

The organization's trajectory - from pioneering AI safety work to acknowledging research failure and pivoting to policy advocacy - represents a cautionary case study in the challenges of theoretical safety research conducted far in advance of the systems it aims to protect against.

References

1. Future of Humanity Institute — official site

The official website of the Future of Humanity Institute (FHI), an Oxford University research center that was foundational in establishing the fields of existential risk research and AI safety. FHI closed on 16 April 2024 after approximately two decades of influential work. The site now serves as an archived record of the institution's history, research agenda, and legacy.

2. AI Alignment Forum — Alignment Forum · Blog post

The AI Alignment Forum is a central community platform for technical AI safety and alignment research discussion. The featured post argues against 'reductive utility' (utility functions over possible worlds) and proposes the Jeffrey-Bolker framework as an alternative that avoids ontological crises and computability constraints by grounding preferences in agent-relative events rather than universal physics.

3. GiveWell MIRI Review — givewell.org

This page appears to be a broken or removed GiveWell shallow review of MIRI (Machine Intelligence Research Institute). GiveWell previously conducted a shallow investigation of MIRI as a potential giving opportunity, evaluating its approach to AI safety research and organizational effectiveness.

4. The Sequences — LessWrong · Eliezer Yudkowsky

A comprehensive collection of Eliezer Yudkowsky's blog posts on human rationality, epistemology, cognitive biases, and value theory, originally written 2006-2009 and reorganized into themed sequences. It serves as the foundational intellectual background for LessWrong, MIRI, CFAR, and much of the effective altruist community, introducing Bayesian reasoning, motivated cognition, reductionism, and the nature of goal-directed systems.

5. MIRI Announces New 'Death With Dignity' Strategy — Alignment Forum · Eliezer Yudkowsky · 2022 · Blog post

Eliezer Yudkowsky's April 2022 post articulating MIRI's shift toward a 'death with dignity' framing, expressing deep pessimism about humanity's ability to solve AI alignment in time, while advising against panic, deception, or abandoning integrity. The post argues that even in a near-hopeless situation, individuals should act with honesty and do what good they can.

6. MIRI Blog — intelligence.org · Blog

The official blog of the Machine Intelligence Research Institute (MIRI), covering technical AI safety research including agent foundations, decision theory, logical uncertainty, and alignment. Posts range from research updates and technical results to broader reflections on the AI safety landscape and existential risk from advanced AI systems.

7. LessWrong — lesswrong.com · Forum

LessWrong is a community blog and forum focused on rationality, epistemics, and AI safety, serving as a primary venue for discussion and development of ideas related to AI alignment, decision theory, and existential risk. It hosts foundational technical posts, research updates, and philosophical discussions from prominent researchers including Eliezer Yudkowsky, Paul Christiano, and many others. The platform has been instrumental in developing and disseminating key AI safety concepts.

8. Center for AI Safety — safe.ai

The Center for AI Safety (CAIS) is a research organization focused on mitigating catastrophic and existential risks from advanced AI systems. It conducts technical research, publishes surveys and statements, and supports field-building efforts across academia and industry. CAIS is notable for its broad coalition-building, including its widely-cited statement on AI extinction risk signed by leading researchers.

9. Anthropic — anthropic.com

Anthropic is an AI safety company focused on building reliable, interpretable, and steerable AI systems. The company conducts frontier AI research and develops Claude, its family of AI assistants, with a stated mission of responsible development and maintenance of advanced AI for long-term human benefit.
10. Embedded Agency (Sequence) — Alignment Forum · Blog post

A foundational sequence by Scott Garrabrant and Abram Demski examining the deep theoretical challenges that arise when AI agents are embedded within—rather than external to—the environments they reason about. It addresses decision theory, world-modeling, and alignment under the realistic condition that an agent is itself a physical subsystem of the world it must model and act upon.

11. Yudkowsky's response to the FLI pause letter — Eliezer Yudkowsky · 2023

Eliezer Yudkowsky argues that the FLI open letter calling for a 6-month AI pause is insufficient, contending that without a verified solution to alignment, continuing AI development at any pace risks human extinction. He calls for an indefinite global halt to large AI training runs, enforced internationally, until the alignment problem is solved.

12. Logical Inductors — arXiv · Scott Garrabrant et al. · 2016 · Paper

This paper introduces logical inductors, a computable algorithm that assigns and refines probabilities to logical statements in formal languages over time, satisfying numerous desirable epistemic properties. The approach resolves longstanding challenges in assigning coherent prior probabilities to logical statements, including self-referential claims, using a criterion motivated by the impossibility of exploiting the algorithm via stock trading strategies. All desirable properties—including timely pattern learning, calibration, and dominating universal semimeasures—follow from a single logical induction criterion.

13. MIRI Publications — intelligence.org

A comprehensive index of all publications from the Machine Intelligence Research Institute (MIRI), covering foundational AI safety research including agent foundations, decision theory, logical uncertainty, and value alignment. This page serves as the primary access point for MIRI's technical and strategic research output spanning over a decade of work.

The about page for the Machine Intelligence Research Institute (MIRI), a nonprofit research organization focused on ensuring that the development of smarter-than-human artificial intelligence is beneficial to humanity. MIRI conducts technical AI safety research aimed at making advanced AI systems more aligned with human values and intentions.

★★★☆☆

ProPublica's Nonprofit Explorer page for the Machine Intelligence Research Institute (MIRI), providing publicly available financial and organizational data filed with the IRS. This includes Form 990 filings showing MIRI's revenue, expenses, executive compensation, and programmatic activities over multiple years.

Wikipedia article providing an overview of the Machine Intelligence Research Institute (MIRI), a nonprofit research organization focused on ensuring that artificial general intelligence (AGI) is developed safely and beneficially. It covers MIRI's history, research agenda, key figures, and its role in pioneering the AI safety field.

★★★☆☆

MIRI's new CEO Malo Bourgon outlines a strategic shift in 2024, prioritizing policy advocacy and communications over technical research, driven by extreme pessimism about solving AI alignment in time to prevent human extinction. MIRI now focuses on pushing for international governmental agreements to halt progress toward smarter-than-human AI, while maintaining a reduced research portfolio.

★★★☆☆

Structured Data

Organization
  • Founded: January 2000
  • Headquarters: Berkeley, CA
  • Legal structure: 501(c)(3) nonprofit
  • Founded by: Eliezer Yudkowsky
  • Website: http://intelligence.org/

Financial
  • Revenue: $1.5M (2024); $1.9M (2023); $1.9M (2022); $25.6M (2021)
  • Annual expenses: $6.5M (2024); $6.9M (2023); $5.3M (2022); $6.8M (2021)
  • Annual cash burn: $6.5M (2024)
  • Net assets: $15.2M (2024); $19.7M (2023); $23.8M (2022); $29.6M (2021)
  • Headcount: 42 (2024); 18 (Dec 2023); 28 (2023)
  • Total funding raised: $15M (as of 2025; earlier entries also list $55M for 2025 and $15.2M for 2024)
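As a consistency check on the fact table, the year-over-year fall in net assets should roughly track the reported operating deficit (expenses minus revenue); a quick sketch, with the residual gaps plausibly reflecting investment gains and losses on held assets (my assumption, not a figure from the filings):

```python
# Cross-check of the structured-data facts above (figures in $M).
# Net-asset declines should roughly match reported operating deficits.
net_assets = {2021: 29.6, 2022: 23.8, 2023: 19.7, 2024: 15.2}
expenses = {2022: 5.3, 2023: 6.9, 2024: 6.5}
revenue = {2022: 1.9, 2023: 1.9, 2024: 1.5}

for year in (2022, 2023, 2024):
    drop = net_assets[year - 1] - net_assets[year]
    deficit = expenses[year] - revenue[year]
    print(f"{year}: net assets fell ${drop:.1f}M vs operating deficit ${deficit:.1f}M")
```

The two columns line up within a couple of million dollars each year; 2022 shows the largest gap, consistent with that year's broad decline in crypto and equity asset values.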

Divisions

  • MIRI Research — team, active

Related Wiki Pages

Top Related Pages

Approaches

AI Alignment · Agent Foundations

Analysis

Instrumental Convergence Framework · Corrigibility Failure Pathways · Donations List Website · Timelines Wiki

Policy

Executive Order 14179: Removing Barriers to American Leadership in AI

Key Debates

AI Alignment Research Agendas · AI Accident Risk Cruxes · Why Alignment Might Be Hard

Risks

Corrigibility Failure

Other

Corrigibility · Nate Soares · AI Control

Organizations

Redwood Research

Concepts

Existential Risk from AI · EA Epistemic Failures in the FTX Era · Situational Awareness · Autonomous Coding

Historical

Deep Learning Revolution Era