Longterm Wiki

Minsky on AI risk in the 80s and 90s


Written by Luke Muehlhauser (former MIRI executive director), this post is valuable as historical documentation: it shows that AI risk concerns from a founding figure of AI predate the modern safety movement, which makes it useful for intellectual history discussions.

Metadata

Importance: 42/100 · blog post · analysis

Summary

Luke Muehlhauser documents and analyzes Marvin Minsky's statements about AI risk during the 1980s and 1990s, examining what one of AI's founding figures thought about the dangers of advanced AI systems. The post serves as a historical record showing that concerns about AI risk predate the modern AI safety movement.

Key Points

  • Documents Marvin Minsky's views on AI risk expressed decades before the modern AI safety field emerged
  • Provides historical evidence that prominent AI researchers recognized potential dangers of advanced AI early on
  • Helps contextualize the intellectual history of AI risk concern, countering the narrative that it is a recent or fringe idea
  • Minsky's status as an AI pioneer lends credibility to early risk concerns and shows the field's founders were not universally dismissive of dangers
  • Useful for understanding how AI safety thinking evolved from early pioneers to the modern alignment research community

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| Early Warnings Era | Historical | 31.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 3 KB
Follow-up to: [AI researchers on AI risk](http://slatestarcodex.com/2015/05/22/ai-researchers-on-ai-risk/); [Fredkin on AI risk in 1979](https://lukemuehlhauser.com/fredkin-on-ai-risk-in-1979/).

Marvin Minsky is another AI scientist who has been thinking about AI risk for a long time, at least since the 1980s. Here he is in a 1983 afterword to Vinge’s novel _True Names_:[1](https://lukemuehlhauser.com/minsky-on-ai-risk-in-the-80s-and-90s/#footnote_1_1222 "An online copy of the afterword is available here, though has been slightly modified from the original. I am quoting from the original, which was written in 1983.")

> The ultimate risk comes when our greedy, lazy, masterminds are able at last to take that final step: to design goal-achieving programs which are programmed to make themselves grow increasingly powerful… It will be tempting to do this, not just for the gain in power, but just to decrease our own human effort in the consideration and formulation of our own desires. If some genie offered you three wishes, would not your first one be, “Tell me, please, what is it that I want the most!” The problem is that, with such powerful machines, it would require but the slightest accident of careless design for them to place their goals ahead of ours, perhaps the well-meaning purpose of protecting us from ourselves, as in _With Folded Hands_, by Jack Williamson; or to protect us from an unsuspected enemy, as in _Colossus_ by D.F. Jones…

And according to Eric Drexler ( [2015](http://www.fhi.ox.ac.uk/wp-content/uploads/MDL-Intelligence-Distillation-for-safe-superintelligent-problem-solving1.pdf)), Minsky was making the [now-standard](http://www.nickbostrom.com/superintelligentwill.pdf) “dangerous-to-humans resource acquisition is a natural subgoal of almost any final goal” argument at least as early as 1990:

> My concerns regarding AI risk, which center on the challenges of long-term AI governance, date from the inception of my studies of advanced molecular technologies, _ca._ 1977. I recall a later conversation with Marvin Minsky (he chairing my doctoral committee, _ca._ 1990) that sharpened my understanding of some of the crucial considerations: Regarding goal hierarchies, Marvin remarked that the high-level task of learning language is, for an infant, a _sub_ goal of getting a drink of water, and that converting the resources of the universe into computers is a potential subgoal of a machine attempting to play perfect chess.

1. An online copy of the afterword is available [here](https://web.media.mit.edu/~minsky/papers/TrueNames.Afterword.html), though has been slightly modified from the original. I am quoting from the original, which was written in 1983. \[ [↩](https://lukemuehlhauser.com/minsky-on-ai-risk-in-the-80s-and-90s/#identifier_1_1222)\]