Longterm Wiki

Machine Intelligence Research Institute

web

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: MIRI

MIRI is one of the pioneering AI safety research organizations, founded in 2000, and has significantly shaped the field's early research agenda around alignment and existential risk from advanced AI.

Metadata

Importance: 55/100
homepage

Summary

The about page for the Machine Intelligence Research Institute (MIRI), a nonprofit research organization focused on ensuring that the development of smarter-than-human artificial intelligence is beneficial to humanity. MIRI conducts technical AI safety research aimed at making advanced AI systems more aligned with human values and intentions.

Key Points

  • MIRI is a nonprofit dedicated to ensuring advanced AI development goes well for humanity
  • The organization focuses on foundational technical research for AI alignment and safety
  • MIRI operates under the premise that smarter-than-human AI poses significant existential risks if not carefully developed
  • The institute is one of the earliest and most influential organizations in the AI safety research space
  • MIRI's work includes both technical research and public outreach on AI risk

Cited by 1 page

| Page | Type | Quality |
|---|---|---|
| Machine Intelligence Research Institute | Organization | 50.0 |

6 FactBase facts citing this source

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 6 KB

# About MIRI

The Machine Intelligence Research Institute (MIRI) is a 501(c)(3) nonprofit based in Berkeley, California. We do research and public outreach intended to help prevent human extinction from the development of artificial superintelligence (ASI).

[Learn more about the risks from advanced AI](https://intelligence.org/the-problem/)

## What we do

Founded more than 20 years ago, MIRI was among the first to recognize the future invention of artificial superintelligence as the most important—and potentially catastrophic—event in the twenty-first century. MIRI was the first organization to advocate for and work on ASI alignment as a technical problem, and has played a central role in building the field over the years.

Unfortunately, our efforts failed to prevent the current emergency. The alignment problem is not on track to be solved before the leading companies succeed in building smarter-than-human AI, and the default outcome is human extinction.

Our priority now is to use the lessons we’ve learned so far to inform the world about the situation and what needs to be done.

Extinction from AI is a live possibility, and the only reasonable response is to stop AI development altogether, until such a time as the alignment problem has been solved.

[Learn more about MIRI’s research](https://intelligence.org/research/)

## History

## 2000

Eliezer Yudkowsky, along with Brian and Sabine Atkins, founded MIRI as the Singularity Institute for Artificial Intelligence with the goal of accelerating progress towards smarter-than-human AI.

## 2000-2003

Yudkowsky [realized](https://www.lesswrong.com/posts/fLRPeXihRaiRo5dyX/the-magnitude-of-his-own-folly) that there would in fact be a problem of aligning smarter-than-human AI with human values.

MIRI pivoted to focus most of its research efforts on ASI alignment (though the term “alignment” would only be coined over a decade later).

## 2006-2012

As part of his field-building efforts, Yudkowsky wrote a series of blog posts on topics including human reasoning, decision theory, morality, and AI. These writings led to the creation of the popular community blog [LessWrong](https://www.lesswrong.com/about), and were later [published as a book](https://intelligence.org/rationality-ai-zombies/).

MIRI co-founded and organized the [Singularity Summit](https://intelligence.org/singularitysummit/), an annual conference that covered topics such as AI, brain-computer interfaces, robotics, and regenerative medicine.

## 2013

The organization changed its name from the Singularity Institute to the Machine Intelligence Research Institute.

## 2013-2018

Research focused on [foundational mathematical problems relevant to AI alignment](https://intelligence.org/technical-agenda/).

MIRI ran a [series of workshops](https://intelligence.org/workshops/) which brought together researchers from a variety of backgrounds to work on problems related to AI alignment.

... (truncated, 6 KB total)
Resource ID: kb-2c16d126b367df5d | Stable ID: NTM4MWFkZW