Longterm Wiki

Credibility Rating

Good (3/5)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: MIRI

This fundraiser page documents MIRI's strategic shift away from technical alignment research toward public advocacy and governance, marking a significant organizational repositioning. It provides useful context for understanding MIRI's current role in the AI safety ecosystem.

Metadata

Importance: 42/100 · Tags: press release, news

Summary

MIRI launched its first fundraiser in six years in December 2025, targeting $6M to fund its pivot from technical AI alignment research to public advocacy and policy work focused on superintelligence risk. The fundraiser raised ~$1.6M in donations plus $1.6M in matching funds (totaling ~$3.2M, 53.7% of goal). Plans for 2026 include expanding communications and governance teams to influence policymakers and promote international coordination to halt the AI race.

Key Points

  • MIRI raised ~$3.2M total (donations + matching) against a $6M target, concluding its first fundraiser since 2019.
  • MIRI has formally pivoted from technical alignment research (~2000–2022) to public alarm-raising, advocacy, and AI governance work.
  • Key 2025 outputs include the NYT bestseller 'If Anyone Builds It, Everyone Dies' by Nate Soares and Eliezer Yudkowsky, plus a draft international agreement on superintelligence.
  • MIRI's core position: building superintelligence with current techniques will cause human extinction, and an international race-stopping agreement is necessary.
  • 2026 plans involve hiring for communications and governance teams and running experiments to mobilize public and policymaker response.

Cited by 1 page

Page                            Type   Quality
AI Alignment Research Agendas   Crux   69.0

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 17 KB
MIRI's 2025 Fundraiser - Machine Intelligence Research Institute
MIRI’s 2025 Fundraiser

December 1, 2025
Alex Vermeer

Update: Our fundraiser has concluded. Many thanks to all our generous supporters for helping us raise just over $1.6M and secure all available matching funds!

 MIRI is running its first fundraiser in six years, targeting $6M. The first $1.6M raised will be matched 1:1 via an SFF grant. Fundraiser ends at midnight on Dec 31, 2025. Support our efforts to improve the conversation about superintelligence and help the world chart a viable path forward. 

Donate Today

Donations: $1,619,347
Matching: $1,600,000
Total: $3,219,347 (53.7% of the $6M goal)

Last updated: January 22, 2026 11:06 (PST)
 
 
 MIRI is a nonprofit with a goal of helping humanity make smart and sober decisions on the topic of smarter-than-human AI.

 Our main focus from 2000 to ~2022 was on technical research to try to make it possible to build such AIs without catastrophic outcomes. More recently, we’ve pivoted to raising an alarm about how the race to superintelligent AI has put humanity on course for disaster.

In 2025, those efforts centered on Nate Soares and Eliezer Yudkowsky’s book (now a New York Times bestseller) If Anyone Builds It, Everyone Dies, with many public appearances by the authors; many conversations with policymakers; the release of an expansive online supplement to the book; and various technical governance publications, including a recent report containing a draft of an international agreement of the kind that could actually address the danger of superintelligence.

Millions have now viewed interviews and appearances with Eliezer and/or Nate, and the possibility of rogue superintelligence and core ideas like “grown, not crafted” are increasingly part of the public discourse. But there is still a great deal to be done if the world is to respond to this issue effectively.

 In 2026, we plan to expand our efforts, hire more people, and try a range of experiments to alert people to the danger of superintelligence and help them make a difference.

To support these efforts, we’ve set a fundraising target of $6M ($4.4M from donors plus 1:1 matching on the first $1.6M raised, thanks to a $1.6M matching grant), with a stretch target of $10M ($8.4M from donors plus $1.6M matching).

Donate here, or read on to learn more.

 
 The Big Picture

As stated in If Anyone Builds It, Everyone Dies:

 If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely

... (truncated, 17 KB total)
Resource ID: 1cb8ff7053544e01 | Stable ID: N2NkOWNiMG