Longterm Wiki

Credibility Rating

Good (3/5)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: MIRI

This is MIRI's official strategic update under new CEO Malo Bourgon, marking a significant organizational pivot away from technical research toward policy and communications advocacy, reflecting deep pessimism about near-term alignment solutions.

Metadata

Importance: 72/100 · organizational report · primary source

Summary

MIRI's new CEO Malo Bourgon outlines a strategic shift in 2024, prioritizing policy advocacy and communications over technical research, driven by extreme pessimism about solving AI alignment in time to prevent human extinction. MIRI now focuses on pushing for international governmental agreements to halt progress toward smarter-than-human AI, while maintaining a reduced research portfolio.

Key Points

  • MIRI considers it very unlikely the alignment field can make sufficient progress before loss of control to smarter-than-human AI causes human extinction.
  • ChatGPT's release and the resulting Overton window shift give MIRI cautious hope that meaningful government regulation could forestall dangerous AI development.
  • Three strategic priorities: (1) policy/international agreements to halt AGI progress, (2) communications to normalize extinction-risk discourse, (3) continued but deprioritized research.
  • Policy and communications are now explicitly higher priorities than technical alignment research for MIRI going forward.
  • MIRI views technical research as increasingly contingent on policy success buying researchers more time.

Cited by 4 pages

Cached Content Preview

HTTP 200 · Fetched Feb 23, 2026 · 22 KB

# MIRI 2024 Mission and Strategy Update

- [January 4, 2024](https://intelligence.org/2024/01/04/)
- [Malo Bourgon](https://intelligence.org/author/malo/)

As we [announced](https://intelligence.org/2023/10/10/announcing-miris-new-ceo-and-leadership-team/) back in October, I have taken on the senior leadership role at MIRI as its CEO. It’s a big pair of shoes to fill, and an awesome responsibility that I’m honored to take on.

There have been several changes at MIRI since [our 2020 strategic update](https://intelligence.org/2020/12/21/2020-updates-and-strategy/), so let’s get into it. [1](https://intelligence.org/2024/01/04/miri-2024-mission-and-strategy-update/#fn1)

* * *

**The short version:**

We think it’s very unlikely that the AI alignment field will be able to make progress quickly enough to prevent human extinction and the loss of the future’s potential value, which we expect will result from loss of control to smarter-than-human AI systems.

However, developments this past year like the release of ChatGPT seem to have shifted the [Overton window](https://en.wikipedia.org/wiki/Overton_window) in a lot of groups. There’s been a lot more discussion of extinction risk from AI, including among policymakers, and the discussion quality seems greatly improved.

This provides a glimmer of hope. While we expect that more shifts in public opinion are necessary before the world takes actions that sufficiently change its course, it now appears more likely that governments could enact meaningful regulations to forestall the development of unaligned, smarter-than-human AI systems. It also seems more possible that humanity could take on a new megaproject squarely aimed at ending the acute risk period.

As such, in 2023, MIRI shifted its strategy to pursue three objectives:

1. **Policy:** Increase the probability that the major governments of the world end up coming to some international agreement to halt progress toward smarter-than-human AI, until humanity’s state of knowledge and justified confidence about its understanding of relevant phenomena has drastically changed; and until we are able to secure these systems such that they can’t fall into the hands of malicious or incautious actors. [2](https://intelligence.org/2024/01/04/miri-2024-mission-and-strategy-update/#fn2)
2. **Communications:** Share our models of the situation with a broad audience, especially in cases where talking about an important consideration could help normalize discussion of it.
3. **Research:** Continue to invest in a portfolio of research. This includes technical alignment research (though we’ve become more pessimistic that such work will have time to bear fruit if policy interventions fail to buy the research field more time), as well as research in support of our policy and communications goals. [3](https://intelligence.org/2024/01/04/miri-2024-mission-and-strategy-update/#fn3)

... (truncated, 22 KB total)
Resource ID: 435b669c11e07d8f | Stable ID: Y2RjMjE1ZT