Machine Intelligence Research Institute
miri (E202)
Path: /knowledge-base/organizations/miri/
Page Metadata
{
"id": "miri",
"numericId": null,
"path": "/knowledge-base/organizations/miri/",
"filePath": "knowledge-base/organizations/miri.mdx",
"title": "MIRI (Machine Intelligence Research Institute)",
"quality": 50,
"importance": 37,
"contentFormat": "article",
"tractability": null,
"neglectedness": null,
"uncertainty": null,
"causalLevel": null,
"lastUpdated": "2026-01-31",
"llmSummary": "Comprehensive organizational history documenting MIRI's trajectory from pioneering AI safety research (2000-2020) to policy advocacy after acknowledging research failure, with detailed financial data showing $5M annual deficit and ~2 year runway. Provides well-sourced analysis of the organization's $25.6M revenue peak (2021), subsequent decline, and strategic pivot away from technical alignment work.",
"structuredSummary": null,
"description": "A pioneering AI safety research organization that shifted from technical alignment research to policy advocacy, founded by Eliezer Yudkowsky in 2000 as the first organization to work on artificial superintelligence alignment.",
"ratings": {
"novelty": 3.5,
"rigor": 6,
"actionability": 2,
"completeness": 7
},
"category": "organizations",
"subcategory": "safety-orgs",
"clusters": [
"community",
"ai-safety"
],
"metrics": {
"wordCount": 1870,
"tableCount": 1,
"diagramCount": 0,
"internalLinks": 8,
"externalLinks": 74,
"footnoteCount": 0,
"bulletRatio": 0.23,
"sectionCount": 26,
"hasOverview": true,
"structuralScore": 11
},
"suggestedQuality": 73,
"updateFrequency": 21,
"evergreen": true,
"wordCount": 1870,
"unconvertedLinks": [
{
"text": "MIRI 2024 Update",
"url": "https://intelligence.org/2024/01/04/miri-2024-mission-and-strategy-update/",
"resourceId": "435b669c11e07d8f",
"resourceTitle": "MIRI's 2024 assessment"
},
{
"text": "MIRI Publications",
"url": "https://intelligence.org/all-publications/",
"resourceId": "fc77e6a5087586a3",
"resourceTitle": "MIRI Papers"
},
{
"text": "MIRI 2024 Update",
"url": "https://intelligence.org/2024/01/04/miri-2024-mission-and-strategy-update/",
"resourceId": "435b669c11e07d8f",
"resourceTitle": "MIRI's 2024 assessment"
},
{
"text": "MIRI 2024 Update",
"url": "https://intelligence.org/2024/01/04/miri-2024-mission-and-strategy-update/",
"resourceId": "435b669c11e07d8f",
"resourceTitle": "MIRI's 2024 assessment"
},
{
"text": "MIRI 2024 Update",
"url": "https://intelligence.org/2024/01/04/miri-2024-mission-and-strategy-update/",
"resourceId": "435b669c11e07d8f",
"resourceTitle": "MIRI's 2024 assessment"
},
{
"text": "MIRI Publications",
"url": "https://intelligence.org/all-publications/",
"resourceId": "fc77e6a5087586a3",
"resourceTitle": "MIRI Papers"
},
{
"text": "MIRI Publications",
"url": "https://intelligence.org/all-publications/",
"resourceId": "fc77e6a5087586a3",
"resourceTitle": "MIRI Papers"
},
{
"text": "MIRI Publications",
"url": "https://intelligence.org/all-publications/",
"resourceId": "fc77e6a5087586a3",
"resourceTitle": "MIRI Papers"
},
{
"text": "MIRI Publications",
"url": "https://intelligence.org/all-publications/",
"resourceId": "fc77e6a5087586a3",
"resourceTitle": "MIRI Papers"
},
{
"text": "MIRI Publications",
"url": "https://intelligence.org/all-publications/",
"resourceId": "fc77e6a5087586a3",
"resourceTitle": "MIRI Papers"
},
{
"text": "MIRI Publications",
"url": "https://intelligence.org/all-publications/",
"resourceId": "fc77e6a5087586a3",
"resourceTitle": "MIRI Papers"
},
{
"text": "MIRI Publications",
"url": "https://intelligence.org/all-publications/",
"resourceId": "fc77e6a5087586a3",
"resourceTitle": "MIRI Papers"
},
{
"text": "MIRI 2024 Update",
"url": "https://intelligence.org/2024/01/04/miri-2024-mission-and-strategy-update/",
"resourceId": "435b669c11e07d8f",
"resourceTitle": "MIRI's 2024 assessment"
},
{
"text": "MIRI 2024 Mission and Strategy Update",
"url": "https://intelligence.org/2024/01/04/miri-2024-mission-and-strategy-update/",
"resourceId": "435b669c11e07d8f",
"resourceTitle": "MIRI's 2024 assessment"
},
{
"text": "All MIRI Publications",
"url": "https://intelligence.org/all-publications/",
"resourceId": "fc77e6a5087586a3",
"resourceTitle": "MIRI Papers"
}
],
"unconvertedLinkCount": 15,
"convertedLinkCount": 0,
"backlinkCount": 10,
"redundancy": {
"maxSimilarity": 14,
"similarPages": [
{
"id": "coefficient-giving",
"title": "Coefficient Giving",
"path": "/knowledge-base/organizations/coefficient-giving/",
"similarity": 14
},
{
"id": "lesswrong",
"title": "LessWrong",
"path": "/knowledge-base/organizations/lesswrong/",
"similarity": 14
},
{
"id": "pause-ai",
"title": "Pause AI",
"path": "/knowledge-base/organizations/pause-ai/",
"similarity": 14
},
{
"id": "research-agendas",
"title": "AI Alignment Research Agenda Comparison",
"path": "/knowledge-base/responses/research-agendas/",
"similarity": 14
},
{
"id": "ai-futures-project",
"title": "AI Futures Project",
"path": "/knowledge-base/organizations/ai-futures-project/",
"similarity": 13
}
]
}
}
Entity Data
{
"id": "miri",
"type": "organization",
"title": "Machine Intelligence Research Institute",
"description": "The Machine Intelligence Research Institute (MIRI) is one of the oldest organizations focused on AI existential risk, founded in 2000 as the Singularity Institute for Artificial Intelligence (SIAI).",
"tags": [
"agent-foundations",
"decision-theory",
"corrigibility",
"instrumental-convergence",
"embedded-agency",
"governance",
"logical-uncertainty",
"rationalist-community",
"lesswrong",
"sharp-left-turn",
"security-mindset",
"deconfusion"
],
"relatedEntries": [
{
"id": "eliezer-yudkowsky",
"type": "researcher"
},
{
"id": "nate-soares",
"type": "researcher"
},
{
"id": "paul-christiano",
"type": "researcher"
},
{
"id": "instrumental-convergence",
"type": "risk"
},
{
"id": "corrigibility-failure",
"type": "risk"
},
{
"id": "sharp-left-turn",
"type": "risk"
},
{
"id": "compute-governance",
"type": "policies"
},
{
"id": "arc",
"type": "organization"
}
],
"sources": [
{
"title": "MIRI Website",
"url": "https://intelligence.org"
},
{
"title": "MIRI 2023 Strategy Update",
"url": "https://intelligence.org/2023/03/09/miri-announces-new-death-with-dignity-strategy/"
},
{
"title": "Risks from Learned Optimization (Hubinger et al.)",
"url": "https://arxiv.org/abs/1906.01820"
},
{
"title": "Logical Induction Paper",
"url": "https://arxiv.org/abs/1609.03543"
},
{
"title": "Embedded Agency (Demski, Garrabrant)",
"url": "https://intelligence.org/2018/10/29/embedded-agency/"
},
{
"title": "LessWrong Sequences",
"url": "https://www.lesswrong.com/sequences"
},
{
"title": "Eliezer Yudkowsky TIME Op-Ed",
"url": "https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/"
},
{
"title": "Agent Foundations Research",
"url": "https://intelligence.org/research-guide/"
},
{
"title": "Facing the Intelligence Explosion (Muehlhauser)",
"url": "https://intelligence.org/files/IE-EI.pdf"
},
{
"title": "MIRI on GiveWell",
"url": "https://www.givewell.org/charities/machine-intelligence-research-institute"
}
],
"lastUpdated": "2025-12",
"website": "https://intelligence.org",
"customFields": []
}
Canonical Facts (0)
No facts for this entity
External Links
{
"wikipedia": "https://en.wikipedia.org/wiki/Machine_Intelligence_Research_Institute",
"lesswrong": "https://www.lesswrong.com/tag/machine-intelligence-research-institute-miri",
"wikidata": "https://www.wikidata.org/wiki/Q2040269"
}
Backlinks (10)
| id | title | type | relationship |
|---|---|---|---|
| research-agendas | AI Alignment Research Agendas | crux | — |
| accident-risks | AI Accident Risk Cruxes | crux | — |
| miri-era | The MIRI Era | historical | — |
| instrumental-convergence-framework | Instrumental Convergence Framework | model | research |
| arc | ARC | organization | — |
| redwood | Redwood Research | organization | — |
| eliezer-yudkowsky | Eliezer Yudkowsky | researcher | — |
| instrumental-convergence | Instrumental Convergence | risk | — |
| mesa-optimization | Mesa-Optimization | risk | — |
| sharp-left-turn | Sharp Left Turn | risk | — |
Frontmatter
{
"title": "MIRI (Machine Intelligence Research Institute)",
"description": "A pioneering AI safety research organization that shifted from technical alignment research to policy advocacy, founded by Eliezer Yudkowsky in 2000 as the first organization to work on artificial superintelligence alignment.",
"sidebar": {
"order": 10
},
"quality": 50,
"lastEdited": "2026-01-31",
"importance": 37,
"update_frequency": 21,
"llmSummary": "Comprehensive organizational history documenting MIRI's trajectory from pioneering AI safety research (2000-2020) to policy advocacy after acknowledging research failure, with detailed financial data showing $5M annual deficit and ~2 year runway. Provides well-sourced analysis of the organization's $25.6M revenue peak (2021), subsequent decline, and strategic pivot away from technical alignment work.",
"ratings": {
"novelty": 3.5,
"rigor": 6,
"actionability": 2,
"completeness": 7
},
"clusters": [
"community",
"ai-safety"
],
"subcategory": "safety-orgs",
"entityType": "organization"
}
Raw MDX Source
---
title: MIRI (Machine Intelligence Research Institute)
description: A pioneering AI safety research organization that shifted from technical alignment research to policy advocacy, founded by Eliezer Yudkowsky in 2000 as the first organization to work on artificial superintelligence alignment.
sidebar:
order: 10
quality: 50
lastEdited: "2026-01-31"
importance: 37
update_frequency: 21
llmSummary: Comprehensive organizational history documenting MIRI's trajectory from pioneering AI safety research (2000-2020) to policy advocacy after acknowledging research failure, with detailed financial data showing $5M annual deficit and ~2 year runway. Provides well-sourced analysis of the organization's $25.6M revenue peak (2021), subsequent decline, and strategic pivot away from technical alignment work.
ratings:
novelty: 3.5
rigor: 6
actionability: 2
completeness: 7
clusters:
- community
- ai-safety
subcategory: safety-orgs
entityType: organization
---
import {DataInfoBox, EntityLink, DataExternalLinks} from '@components/wiki';
<DataExternalLinks pageId="miri" />
<DataInfoBox entityId="E202" />
## Quick Assessment
| Dimension | Assessment | Evidence |
|-----------|------------|----------|
| **Historical Significance** | First organization to focus on ASI alignment as a technical problem | Among the first to recognize ASI as the most important event of the 21st century [MIRI About](https://intelligence.org/about/) |
| **Current Strategy** | Policy advocacy to halt AI development | Major 2024 pivot after acknowledging alignment research "extremely unlikely to succeed in time" [MIRI About](https://intelligence.org/about/) |
| **Research Output** | Minimal recent publications | Near-zero new publications from core researchers between 2018 and 2022 [LessWrong](https://www.lesswrong.com/posts/rfNHWe5JWhGuSqMHN/steelmanning-miri-critics) |
| **Financial Status** | Operating at deficit with ≈2 year runway | \$4.97M net loss in 2024, \$15.24M in net assets [ProPublica](https://projects.propublica.org/nonprofits/organizations/582565917) |
| **Field Impact** | Controversial but influential | Raised awareness but faced criticism for theoretical approach and failed research programs [LessWrong](https://www.lesswrong.com/posts/rfNHWe5JWhGuSqMHN/steelmanning-miri-critics) |
## Overview
The Machine Intelligence Research Institute (MIRI) is a 501(c)(3) nonprofit research organization based in Berkeley, California, founded in 2000 by <EntityLink id="E114">Eliezer Yudkowsky</EntityLink> with funding from Brian and Sabine Atkins [Wikipedia](https://en.wikipedia.org/wiki/Machine_Intelligence_Research_Institute). Originally named the Singularity Institute for Artificial Intelligence (SIAI), MIRI was the first organization to advocate for and work on artificial superintelligence (ASI) alignment as a technical problem [MIRI About](https://intelligence.org/about/).
The organization has undergone several dramatic strategic pivots throughout its 24-year history. Initially created to accelerate AI development, MIRI shifted focus in 2005 when Yudkowsky became concerned about superintelligent AI risks [Wikipedia](https://en.wikipedia.org/wiki/Machine_Intelligence_Research_Institute). After two decades of technical research, MIRI announced a major strategy pivot in 2024, moving away from alignment research toward policy advocacy aimed at halting the development of increasingly general AI models [MIRI About](https://intelligence.org/about/). This shift came after the organization acknowledged that its primary research initiative had "largely failed" [MIRI 2024 Update](https://intelligence.org/2024/01/04/miri-2024-mission-and-strategy-update/).
With approximately 42 employees [ProPublica](https://projects.propublica.org/nonprofits/organizations/582565917) and an interdisciplinary approach that deliberately hires from computer science, economics, mathematics, and philosophy backgrounds [Future of Life Institute](https://futureoflife.org/ai/ai-the-danger-of-good-intentions/), MIRI aligns itself with the principles and objectives of the effective altruism movement [Wikipedia](https://en.wikipedia.org/wiki/Machine_Intelligence_Research_Institute).
## History
### Founding and Early Years (2000-2005)
MIRI was established in 2000 with a paradoxical original mission: accelerating AI development. The organization operated under this goal until 2005, when founder Eliezer Yudkowsky's concerns about superintelligent AI risks prompted a fundamental reorientation toward AI safety [Wikipedia](https://en.wikipedia.org/wiki/Machine_Intelligence_Research_Institute). That same year, the organization relocated from Atlanta to Silicon Valley [Wikipedia](https://en.wikipedia.org/wiki/Machine_Intelligence_Research_Institute), positioning itself at the heart of the technology industry.
### Singularity Summit Era (2006-2012)
Beginning in 2006, MIRI organized the annual Singularity Summit to discuss AI's future and risks, initially in cooperation with Stanford University and with funding from Peter Thiel [Wikipedia](https://en.wikipedia.org/wiki/Machine_Intelligence_Research_Institute). These summits became prominent venues for discussing the implications of advanced artificial intelligence and helped raise awareness of AI safety concerns within both academic and technology communities.
In December 2012, the institute sold its name, web domain, and the Singularity Summit to Singularity University [Wikipedia](https://en.wikipedia.org/wiki/Machine_Intelligence_Research_Institute), marking the end of this public outreach phase. The following month, in January 2013, the organization adopted its current name: Machine Intelligence Research Institute [Wikipedia](https://en.wikipedia.org/wiki/Machine_Intelligence_Research_Institute).
### Technical Research Focus (2012-2020)
During this period, MIRI pursued an ambitious agenda focused on mathematical foundations of AI safety. The organization published actively on topics including logical uncertainty and probabilistic reasoning, decision theory and <EntityLink id="E584">agent foundations</EntityLink>, <EntityLink id="E439">AI alignment</EntityLink> and value learning, corrigibility and interruptibility, <EntityLink id="E483">formal verification</EntityLink> of AI systems, and mathematical foundations of safe AI [MIRI Publications](https://intelligence.org/all-publications/).
MIRI received significant funding during this era. <EntityLink id="E552">Coefficient Giving</EntityLink> provided \$2,652,500 over two years in February 2019 for general support, increasing its annual support from \$1.4 million in 2018 to \$2.31 million in 2019 [Coefficient Giving](https://www.openphilanthropy.org/grants/machine-intelligence-research-institute-general-support-2019/). In April 2020, Coefficient Giving awarded MIRI its largest grant to date: \$7,703,750, with \$6.24 million from Coefficient Giving's main funders and \$1.46 million from a partnership with BitMEX co-founder Ben Delo [Coefficient Giving](https://www.openphilanthropy.org/grants/machine-intelligence-research-institute-general-support-2019/). At this peak, Coefficient Giving was providing approximately 60% of MIRI's predicted budgets for 2020-2021 [Coefficient Giving](https://www.openphilanthropy.org/grants/machine-intelligence-research-institute-general-support-2019/).
The organization's revenue later spiked dramatically to \$25.6 million in 2021 [ProPublica](https://projects.propublica.org/nonprofits/organizations/582565917), partly due to a donation of several million dollars' worth of Ethereum from Vitalik Buterin [Coefficient Giving](https://www.openphilanthropy.org/grants/machine-intelligence-research-institute-general-support-2019/).
### Strategic Collapse and Pivot (2020-Present)
The 2020 update revealed a critical turning point: MIRI's primary research initiative had "largely failed," prompting years of regrouping [MIRI 2024 Update](https://intelligence.org/2024/01/04/miri-2024-mission-and-strategy-update/). By 2021, MIRI announced a reduced emphasis on technical research in favor of advocacy and policy influence, citing diminishing returns on <EntityLink id="E19">alignment progress</EntityLink>. The slowdown was already visible in the publication record, with near-zero new publications from core researchers between 2018 and 2022 [LessWrong](https://www.lesswrong.com/posts/rfNHWe5JWhGuSqMHN/steelmanning-miri-critics).
The organization also grew "more pessimistic that such work will have time to bear fruit" - that is, that technical alignment research would pay off absent policy interventions [MIRI 2024 Update](https://intelligence.org/2024/01/04/miri-2024-mission-and-strategy-update/). This assessment culminated in the 2024 announcement of a major strategy pivot away from alignment research entirely: MIRI now focuses on attempting to halt the development of increasingly general AI models through discussions with policymakers about the extreme risks artificial superintelligence poses [MIRI About](https://intelligence.org/about/).
## Current Operations
### Organizational Structure
MIRI operates as a 501(c)(3) nonprofit with approximately 42 employees [ProPublica](https://projects.propublica.org/nonprofits/organizations/582565917). The organization deliberately hires from diverse backgrounds including computer science, economics, mathematics, and philosophy, recognizing that AI safety requires interdisciplinary perspectives [Future of Life Institute](https://futureoflife.org/ai/ai-the-danger-of-good-intentions/).
The leadership team includes:
- **Eliezer Yudkowsky** - Chair and Head Researcher (\$599,970 compensation in 2024) [ProPublica](https://projects.propublica.org/nonprofits/organizations/582565917)
- **Malo Bourgon** - CEO (\$241,531 compensation in 2024) [ProPublica](https://projects.propublica.org/nonprofits/organizations/582565917)
- **Nate Soares** - President (\$236,614 compensation in 2024) [ProPublica](https://projects.propublica.org/nonprofits/organizations/582565917)
- **Scott Garrabrant** - Employee (\$296,735 compensation in 2024) [ProPublica](https://projects.propublica.org/nonprofits/organizations/582565917)
- **Benya Fallenstein** - Research Fellow (\$239,947 compensation in 2024) [ProPublica](https://projects.propublica.org/nonprofits/organizations/582565917)
### Financial Position
MIRI's financial situation has deteriorated significantly from its 2021 peak. The organization reported \$1,534,913 in total revenue for 2024, while expenses reached \$6,508,701, resulting in a net loss of \$4,973,788 [ProPublica](https://projects.propublica.org/nonprofits/organizations/582565917). Despite this deficit, MIRI maintains \$16,493,789 in total assets and \$15,242,215 in net assets [ProPublica](https://projects.propublica.org/nonprofits/organizations/582565917), providing approximately two years of operational runway [ProPublica](https://projects.propublica.org/nonprofits/organizations/582565917).
Executive compensation represented \$3,132,826, or 48.1% of total expenses in 2024 [ProPublica](https://projects.propublica.org/nonprofits/organizations/582565917). The organization projected spending \$5.6 million in 2024 and expects expenses of \$6.5 million to \$7 million in 2025 [ProPublica](https://projects.propublica.org/nonprofits/organizations/582565917).
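The headline figures in this section can be sanity-checked directly from the reported numbers. The sketch below is a back-of-the-envelope recomputation under simplifying assumptions (flat revenue, all net assets treated as spendable); it is an illustration, not MIRI's own budgeting, and a naive net-assets-over-deficit calculation lands closer to three years than the roughly two-year runway cited above, presumably because the latter builds in the higher expenses projected for 2025.

```python
# Back-of-the-envelope check of the 2024 figures reported in the ProPublica
# filing cited above. Rough illustration only: assumes flat revenue and treats
# all net assets as spendable, which MIRI's own planning may not.

net_assets = 15_242_215      # net assets, end of 2024
revenue = 1_534_913          # total revenue, 2024
expenses = 6_508_701         # total expenses, 2024
compensation = 3_132_826     # compensation figure cited above

net_loss = expenses - revenue
print(f"2024 net loss: ${net_loss:,}")                                         # $4,973,788

print(f"Compensation share of 2024 expenses: {compensation / expenses:.1%}")   # 48.1%

# Naive runway at the 2024 burn rate: net assets divided by the annual deficit.
# This gives ~3.1 years; the ~2-year figure cited above presumably reflects the
# higher expenses projected for 2025 ($6.5M-$7M) and more cautious assumptions.
print(f"Naive runway: {net_assets / net_loss:.1f} years")
```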
### Current Strategy
MIRI's current focus is on attempting to halt the development of increasingly general AI models through discussions with policymakers about the extreme risks artificial superintelligence poses [MIRI About](https://intelligence.org/about/). This represents a dramatic departure from the organization's historical emphasis on technical alignment research.
The organization acknowledges the pessimistic nature of this approach, stating that policy efforts are "very unlikely to save us, but all other plans we know of seem even less likely to succeed" [MIRI 2024 Update](https://intelligence.org/2024/01/04/miri-2024-mission-and-strategy-update/). This reflects a belief that alignment research is "extremely unlikely to succeed in time to prevent an unprecedented catastrophe" [MIRI About](https://intelligence.org/about/).
## Research Legacy
### Publication Timeline
MIRI's research output followed a clear trajectory. Between 2012 and 2016, the organization actively published on topics like logical uncertainty, decision theory, and AI alignment [MIRI Publications](https://intelligence.org/all-publications/). However, from 2018 to 2022, core researchers produced near-zero new publications [LessWrong](https://www.lesswrong.com/posts/rfNHWe5JWhGuSqMHN/steelmanning-miri-critics), reflecting the organization's acknowledgment that its foundational bet on mathematical formalization had underdelivered relative to capability advances [LessWrong](https://www.lesswrong.com/posts/rfNHWe5JWhGuSqMHN/steelmanning-miri-critics).
### Research Areas
MIRI's technical work focused on six core areas, all aimed at developing mathematical foundations for safe artificial intelligence:
1. **Logical uncertainty and probabilistic reasoning** - Developing frameworks for reasoning under logical uncertainty [MIRI Publications](https://intelligence.org/all-publications/)
2. **Decision theory and agent foundations** - Theoretical work on how rational agents should make decisions [MIRI Publications](https://intelligence.org/all-publications/)
3. **AI alignment and value learning** - Methods for ensuring AI systems pursue intended goals [MIRI Publications](https://intelligence.org/all-publications/)
4. **Corrigibility and interruptibility** - Designing systems that can be safely modified or shut down [MIRI Publications](https://intelligence.org/all-publications/)
5. **Formal verification of AI systems** - Mathematical proofs of system properties [MIRI Publications](https://intelligence.org/all-publications/)
6. **Mathematical foundations of safe AI** - Fundamental theoretical work underlying safety approaches [MIRI Publications](https://intelligence.org/all-publications/)
### Assessment of Research Impact
Eliezer Yudkowsky assessed that "the gameboard looks 'incredibly grim' to him, because from his perspective the field has made almost no progress on the alignment problem" [LessWrong](https://www.lesswrong.com/posts/rfNHWe5JWhGuSqMHN/steelmanning-miri-critics). This pessimism echoes the organization's own admission, noted above, that its bet on mathematical formalization underdelivered relative to the pace of capability gains [LessWrong](https://www.lesswrong.com/posts/rfNHWe5JWhGuSqMHN/steelmanning-miri-critics).
Despite these internal assessments, MIRI received recognition as a recommended charity from Raising for Effective Giving, which cited the organization's impact potential in preventing "vast amounts of future suffering," the funding gap for AI safety work, and its effective methodology with historical precedent in computer science foundations [Raising for Effective Giving](https://reg-charity.org/recommended-charities/machine-intelligence-research-institute/).
## Criticisms and Controversies
### Organizational Criticisms
MIRI has faced allegations of cult-like dynamics, with critics claiming that "MIRI and LW [are] just an Eliezer-worshipping cult" [LessWrong](https://www.lesswrong.com/posts/rfNHWe5JWhGuSqMHN/steelmanning-miri-critics). A <EntityLink id="E538">LessWrong</EntityLink> compilation of MIRI criticisms identified <EntityLink id="E156">Holden Karnofsky</EntityLink>'s critique as "the best criticism of MIRI as an organisation" [LessWrong](https://www.lesswrong.com/posts/rfNHWe5JWhGuSqMHN/steelmanning-miri-critics).
Even major funders expressed reservations. Coefficient Giving's 2016 evaluation included significant concerns about MIRI's Agent Foundations research agenda, though they continued supporting the organization for other reasons [Coefficient Giving](https://www.openphilanthropy.org/grants/machine-intelligence-research-institute-general-support-2019/).
### Research Methodology Critiques
Critics have questioned whether theoretical work can be done so far in advance of testing and experimentation [LessWrong](https://www.lesswrong.com/posts/rfNHWe5JWhGuSqMHN/steelmanning-miri-critics). This challenge to MIRI's highly theoretical approach proved prescient, as the organization itself later acknowledged that its primary research initiative had "largely failed" [MIRI 2024 Update](https://intelligence.org/2024/01/04/miri-2024-mission-and-strategy-update/).
### Technical Disagreements
Several core technical assumptions have faced criticism:
**Generalization thesis vagueness**: Critics describe MIRI's "generalization thesis" - the idea that smart systems exhibiting markers of goal-directedness will pick up its dangerous varieties through generalization - as "unsatisfyingly vague" [LessWrong](https://www.lesswrong.com/posts/rfNHWe5JWhGuSqMHN/steelmanning-miri-critics).
**Goal-directedness concept**: Critics note that "whether coding assistants are less 'generally goal-directed' than a hypothetical machine that manipulates users... is actually theoretically undecidable," questioning whether goal-directedness will cause the behaviors MIRI worries about [LessWrong](https://www.lesswrong.com/posts/rfNHWe5JWhGuSqMHN/steelmanning-miri-critics).
### Policy Position Critiques
MIRI's current advocacy for shutting down AI research has drawn significant pushback. Critics argue this position "would obviously be very difficult, and very damaging (because we don't get the benefits of AI for all time it's shut down)" [LessWrong](https://www.lesswrong.com/posts/rfNHWe5JWhGuSqMHN/steelmanning-miri-critics).
## Influence and Legacy
MIRI was among the first organizations to recognize the future invention of artificial superintelligence as the most important and potentially catastrophic event in the twenty-first century [MIRI About](https://intelligence.org/about/). This early recognition helped establish AI safety as a legitimate field of study and influenced the development of alignment research at major AI laboratories.
The organization's alignment with effective altruism principles [Wikipedia](https://en.wikipedia.org/wiki/Machine_Intelligence_Research_Institute) and its role in hosting the Singularity Summit contributed to raising awareness of AI safety concerns among philanthropists, researchers, and policymakers. MIRI's work influenced the emergence of AI safety as a funded research area, even as its own technical research program ultimately failed to achieve its goals.
The organization's trajectory - from pioneering AI safety work to acknowledging research failure and pivoting to policy advocacy - represents a cautionary case study in the challenges of theoretical safety research conducted far in advance of the systems it aims to protect against.
## Sources and Further Reading
### Primary Sources
- [MIRI About Page](https://intelligence.org/about/) - Official organizational overview
- [MIRI 2024 Mission and Strategy Update](https://intelligence.org/2024/01/04/miri-2024-mission-and-strategy-update/) - Announcement of policy pivot
- [All MIRI Publications](https://intelligence.org/all-publications/) - Complete research output
### Financial and Organizational Information
- [MIRI on ProPublica Nonprofit Explorer](https://projects.propublica.org/nonprofits/organizations/582565917) - Detailed financial filings
- [Coefficient Giving - MIRI General Support (2019)](https://www.openphilanthropy.org/grants/machine-intelligence-research-institute-general-support-2019/) - Major grant details
### Critical Analyses
- [Steelmanning MIRI Critics - LessWrong](https://www.lesswrong.com/posts/rfNHWe5JWhGuSqMHN/steelmanning-miri-critics) - Compilation of major criticisms
- [MIRI: The Danger of Good Intentions - Future of Life Institute](https://futureoflife.org/ai/ai-the-danger-of-good-intentions/) - Profile and assessment
### General Background
- [Machine Intelligence Research Institute - Wikipedia](https://en.wikipedia.org/wiki/Machine_Intelligence_Research_Institute) - Historical overview
- [MIRI - Raising for Effective Giving](https://reg-charity.org/recommended-charities/machine-intelligence-research-institute/) - Charity evaluation