Longterm Wiki

David Sacks - Footnote 52

Partial · 85% confidence

1 evidence check

Last checked: 4/3/2026

The claim that the EA Forum criticized Sacks is not directly supported by the article: it mentions Matthew Adelstein, who writes an EA-focused Substack, disagreeing with Sacks and Weiss-Blatt's portrayal, which is not the same as the EA Forum itself issuing a criticism. The claim that EA views AI risks as comparable to nuclear and biological threats is also a slight oversimplification: the article says longtermism prioritizes preventing existential risks such as pandemics, nuclear war, or rogue AI, but does not state that these risks are viewed as directly comparable.

Evidence — 1 source, 1 check

Partial · 85% · Haiku 4.5 · 4/3/2026
Found: He frames EA-funded AI safety research as a "doomer industrial complex" that manipulates public opinion, ignoring what the EA community views as genuine risks comparable to nuclear and biological threats


Debug info

Record type: citation

Record ID: page:david-sacks:fn52