Longterm Wiki

MIRI Technical Reports

Source type: web

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: MIRI

MIRI's technical reports are foundational references for theoretical AI alignment research, particularly for those interested in agent foundations and decision theory; some papers like 'Logical Induction' are widely cited across the broader AI safety community.

Metadata

Importance: 72/100 · organizational report · homepage

Summary

The Machine Intelligence Research Institute (MIRI) technical reports page hosts a collection of formal research papers and technical documents focused on the mathematical and theoretical foundations of AI alignment. These reports cover topics such as decision theory, logical uncertainty, agent foundations, and corrigibility. The collection represents MIRI's core research output aimed at solving fundamental problems in building safe and aligned AI systems.

Key Points

  • Houses MIRI's primary research output on agent foundations, decision theory, and mathematical AI alignment approaches
  • Includes seminal works on topics like Löbian obstacles, logical induction, and updateless decision theory
  • Represents a formal, mathematics-first approach to AI safety rather than empirical or engineering-focused methods
  • Many reports address foundational problems in building agents that behave safely under self-reference and uncertainty
  • Serves as a key reference for researchers working on theoretical AI alignment and agent design

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| AI Risk Interaction Network Model | Analysis | 64.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 0 KB

# Not Found (Error 404)

## Page Not Found

Sorry, but we can’t find what you were looking for.
Resource ID: c36ff7b8236cc941 | Stable ID: NzA3ZmFiNm