Where I Am Donating in 2025
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: LessWrong
A personal donation strategy post from a LessWrong community member; useful as a data point on how EA-aligned individuals prioritize AI safety interventions in 2025, particularly the policy/advocacy vs. technical safety debate.
Forum Post Details
Metadata
Summary
Michael Dickens outlines his 2025 charitable giving strategy, prioritizing AI existential risk mitigation through policy and advocacy over technical safety research. He argues for a global moratorium on superintelligent AI development and supports intermediate steps like compute monitoring and safety regulations as neglected, high-impact interventions.
Key Points
- AI misalignment is identified as the most significant existential threat, warranting prioritization over other causes.
- Advocates for a global moratorium on superintelligent AI development until safety is demonstrated, with intermediate policy steps.
- Policy and advocacy work is considered more neglected than technical AI safety research, making it higher marginal value.
- Cause prioritization is largely unchanged from 2024, reflecting stable views on the AI risk landscape.
- Compute monitoring and safety regulations are highlighted as tractable near-term levers for reducing existential risk.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| ControlAI | Organization | 63.0 |
Cached Content Preview
Where I Am Donating in 2025
by MichaelDickens, 28th Nov 2025, 17 min read
Last year I gave my reasoning on cause prioritization and did shallow reviews of some relevant orgs. I'm doing it again this year.
Cross-posted to my website.
Cause prioritization
In September, I published a report on the AI safety landscape, specifically focusing on AI x-risk policy/advocacy.
The prioritization section of the report explains why I focused on AI policy. It's similar to what I wrote about prioritization in my 2024 donations post, but more fleshed out. I won't go into detail on cause prioritization in this post because those two previous articles explain my thinking.
My high-level prioritization is mostly unchanged since last year. In short:
- Existential risk is a big deal.
- AI misalignment risk is the biggest existential risk.
- Within AI x-risk, policy/advocacy is much more neglected than technical research.
In the rest of this section, I will cover:
- What I want to achieve with my donations
- How I've changed my mind since last year
- How my confidence has increased since last year
What I want my donations to achieve
By donating, I want to increase the chances that we get a global ban on developing superintelligent AI until it is proven safe.
"The Problem" is my favorite article-length explanation of why AI misalignment is a big deal. For a longer take, I also like MIRI's book .
MIRI says:
On our view, the international community’s top immediate priority should be creating an “off switch” for frontier AI development. By “creating an off switch”, we mean putting in place the systems and infrastructure necessary to either shut down frontier AI projects or enact a general ban.
I agree with this. At some point, we will probably need a halt on frontier AI development, or else we will face an unacceptably high risk of extinction. And that time might arrive soon, so we need to start working on it now.
This Google Doc explains why I believe a moratorium on frontier AI development is better than "softer" safety regulations. In short: no one knows how to write AI safety regulations that prevent us from dying. If we knew how to do that, then I'd want it; but since we don't, the best outcome is to not build superintelligent AI until we know how to prevent it from killing everyone.
That said, I still support efforts to implement AI safety regulations, and I think that sort of work is among the best things one can be doing, because:
My best guess is that soft safety regulations won't prevent extinction, but I could be wrong about that—they might turn out to work.
Some kinds of safety regulations are relatively easy to implement and would be a net improvement.
... (truncated, 31 KB total)