Racing Through a Minefield
blogAuthor
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: Alignment Forum
A conceptual piece by Eliezer Yudkowsky on the Alignment Forum using superstimuli as an analogy for understanding how market-deployed AI systems may systematically optimize against genuine human welfare, relevant to AI deployment governance debates.
Metadata
Summary
Yudkowsky argues that competitive market incentives systematically drive the creation of 'superstimuli'—products engineered to exploit evolved preferences so intensely they override basic survival instincts. Without incentives aligned to genuine human welfare, markets will produce increasingly potent engagement-maximizing products that cause serious harm. This serves as a conceptual foundation for understanding misaligned AI deployment risks.
Key Points
- Market competition rewards optimizing for engagement metrics rather than actual human welfare, creating a structural misalignment problem.
- Superstimuli (candy, altered beauty images, addictive games) exploit evolved preferences far beyond their original adaptive context, sometimes fatally.
- The core danger is that without countervailing incentives, competitive pressure will keep escalating the potency of harmful superstimuli.
- This framing extends naturally to AI systems that could be optimized for engagement/metrics rather than genuine user or societal benefit.
- Addresses the deployment problem: even well-understood harms get deployed because no individual actor has sufficient incentive to stop.
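The structural misalignment described above is a proxy-optimization problem: the market selects for a measurable proxy (engagement) that diverges from the true objective (welfare) at high intensity. A minimal toy sketch, with made-up `engagement` and `welfare` functions chosen purely for illustration (not from the source), shows how selecting on the proxy overshoots the welfare optimum:

```python
def engagement(potency):
    """Proxy the market rewards: strictly increasing in product potency."""
    return potency

def welfare(potency):
    """True objective (hypothetical): rises, peaks, then declines as the
    superstimulus starts displacing food, sleep, and relationships."""
    return potency - 0.1 * potency ** 2  # peaks at potency = 5

# Candidate products at potency levels 0.0, 0.5, ..., 20.0
candidates = [i * 0.5 for i in range(41)]

market_pick = max(candidates, key=engagement)   # what competition selects
welfare_pick = max(candidates, key=welfare)     # what we'd actually want

print(market_pick, welfare_pick, welfare(market_pick))
# The market-selected potency far exceeds the welfare optimum, and
# welfare at that potency is strongly negative.
```

The point of the sketch is only that a monotone proxy plus competitive selection pushes potency to the boundary of the feasible set, regardless of where true welfare peaks.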
Cited by 2 pages
| Page | Type | Quality |
|---|---|---|
| AI Doomer Worldview | Concept | 38.0 |
| Governance-Focused Worldview | Concept | 67.0 |
Cached Content Preview
# [Superstimuli and the Collapse of Western Civilization](https://www.lesswrong.com/posts/Jq73GozjsuhdwMLEG/superstimuli-and-the-collapse-of-western-civilization)
by [Eliezer Yudkowsky](https://www.lesswrong.com/users/eliezer_yudkowsky?from=post_header)
16th Mar 2007
5 min read
At [least](http://news.xinhuanet.com/english/2005-11/01/content_3714003.htm) [three](http://phuze.com/2007/02/28/chinese-gamer-dies-playing-online-games.html) [people](http://news.bbc.co.uk/2/hi/technology/4137782.stm) have died playing online games for days without rest. People have lost their spouses, jobs, and children to World of Warcraft. If people have the right to play video games - and it's hard to imagine a more fundamental right - [then the market is going to respond](https://www.lesswrong.com/lw/h0/burchs_law) by supplying the most _engaging_ video games that can be sold, to the point that exceptionally engaged consumers are removed from the gene pool.
How does a consumer product become so _involving_ that, after 57 hours of using the product, the consumer would rather use the product for one more hour than eat or sleep? (I suppose one could argue that the consumer makes a rational decision that they'd rather play Starcraft for the next hour than live out the rest of their lives, but let's just not go there. Please.)
A candy bar is a _superstimulus:_ it contains more concentrated sugar, salt, and fat than anything that exists in the ancestral environment. A candy bar matches taste buds that evolved in a hunter-gatherer environment, but it matches those taste buds much more strongly than anything that actually existed _in_ the hunter-gatherer environment. The signal that once reliably correlated to healthy food has been hijacked, blotted out with a point in tastespace that wasn't in the training dataset - an impossibly distant outlier on the old ancestral graphs. Tastiness, formerly representing the evolutionarily identified correlates of healthiness, has been reverse-engineered and perfectly matched with an artificial substance. Unfortunately there's no equally powerful market incentive to make the resulting food item as healthy as it is tasty. We can't taste healthfulness, after all.
The now-famous [Dove Evolution](http://www.youtube.com/watch?v=iYhCn0jf46
... (truncated, 72 KB total)