Longterm Wiki

Machine Intelligence Research Institute

web

Credibility Rating

Good (3/5)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: MIRI

MIRI is a foundational organization in the AI safety ecosystem; its research agenda and publications have significantly shaped the field's early theoretical frameworks.

Metadata

Importance: 72/100

homepage

Summary

MIRI is a nonprofit research organization focused on ensuring that advanced AI systems are safe and beneficial. It conducts technical research on the mathematical foundations of AI alignment, aiming to solve core theoretical problems before transformative AI is developed. MIRI is one of the pioneering organizations in the AI safety field.

Key Points

  • Focuses on foundational mathematical and theoretical research to make advanced AI systems aligned with human values
  • One of the earliest dedicated AI safety organizations, founded in 2000 (originally as the Singularity Institute for Artificial Intelligence, SIAI)
  • Known for work on decision theory, logical uncertainty, and agent foundations research
  • Advocates for proactive safety research before powerful AI systems are deployed
  • Publishes technical papers and hosts events to advance the AI safety research community

Cited by 16 pages

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 3 KB

# The AI industry is racing toward a precipice.

## The default consequence of the creation of artificial superintelligence (ASI) is human extinction.

## Our survival depends on delaying the creation of ASI, as soon as we can, for as long as necessary.

[![](https://intelligence.org/wp-content/uploads/2025/12/isolatedBook-1.png)](https://ifanyonebuildsit.com/?ref=iojune)

[Available now](https://ifanyonebuildsit.com/?ref=iojune)


[Read more](https://intelligence.org/#read-more)


[Read more](https://intelligence.org/#our-view)

## Our view

For over two decades, the Machine Intelligence Research Institute (MIRI) has worked to understand and prepare for the critical challenges that humanity will face as it transitions to a world with artificial superintelligence. Our technical and philosophical work helped found the field of AI alignment, and our researchers originated many of the theories and concepts central to today’s discussions of AI.

Recent rapid advances in the frontier of artificial intelligence have dramatically shortened estimates of when superintelligence will arrive. Technical progress on safety, alignment, and control has failed to keep up. Humanity does not understand the internal workings of present systems well enough to completely control them or robustly steer their behaviors, let alone the far more powerful and complex systems expected in the coming years.

**If ASI is developed and deployed any time soon, by any nation or group, via anything remotely resembling current methods, the most likely outcome is human extinction.**

This is a bold claim, and we do not make it lightly. Many of the world’s experts (including some of those within organizations at the cutting edge) share our concerns. Yet progress toward ASI continues at a breakneck pace. The industry is racing forward; those actually building the systems of tomorrow cannot be relied upon to stop in time.

**Our survival depends on some form of globally coordinated and collectively enforced moratorium on the development of ASI, as soon as we can, for as long as necessary.**

[More about AI catastrophe](https://intelligence.org/the-problem/)

[**The Problem**: Why do we think smarter-than-human AI poses such an enormous risk? What is the shape of the problem?](https://intelligence.org/the-problem/)

[**The Briefing**: A shorter introduction to AI catastrophe.](https://intelligence.org/briefing)

[**Research**: How should the state of the science inform next steps for policymakers?](https://intelligence.org/research/)