Longterm Wiki

Melanie Mitchell: Why AI Is Harder Than We Think

paper

Author

Melanie Mitchell

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

A widely cited critical analysis of overconfidence in AI progress by cognitive scientist Melanie Mitchell; useful for grounding discussions of AI timelines, capability evaluation, and the gap between narrow AI performance and general intelligence.

Paper Details

Citations
126 (7 influential)
Year
2021

Metadata

Importance: 72/100 · arXiv preprint · analysis

Abstract

Since its beginning in the 1950s, the field of artificial intelligence has cycled several times between periods of optimistic predictions and massive investment ("AI spring") and periods of disappointment, loss of confidence, and reduced funding ("AI winter"). Even with today's seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected. One reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself. In this paper I describe four fallacies in common assumptions made by AI researchers, which can lead to overconfident predictions about the field. I conclude by discussing the open questions spurred by these fallacies, including the age-old challenge of imbuing machines with humanlike common sense.

Summary

Melanie Mitchell argues that AI progress has repeatedly been derailed by four fallacies about the nature of intelligence, leading researchers to underestimate the difficulty of achieving general AI. The paper examines historical overconfidence in AI timelines and capabilities, diagnosing systematic conceptual errors including conflating narrow task performance with general intelligence and underestimating the complexity of human cognition.

Key Points

  • Identifies four recurring fallacies that cause AI researchers to overestimate progress: assuming narrow intelligence lies on a continuum with general intelligence; assuming that tasks easy for humans are easy for machines (and hard tasks hard); the lure of "wishful mnemonics," in which suggestive labels like "learning" or "understanding" inflate what systems actually do; and assuming that intelligence resides entirely in the brain, apart from body and environment.
  • Historical AI boom-bust cycles stem from consistent underestimation of how hard human-level cognition is to replicate in machines.
  • Performance on benchmarks is often mistaken for broader understanding, leading to inflated claims about AI capabilities.
  • Human intelligence involves embodiment, cultural context, and flexible abstraction that current AI systems fundamentally lack.
  • Calls for more epistemic humility in AI research and caution about near-term AGI predictions.

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| Long-Timelines Technical Worldview | Concept | 91.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 53 KB
# Why AI is Harder Than We Think

Melanie Mitchell

Santa Fe Institute

Santa Fe, NM, USA

mm@santafe.edu


## Introduction

The year 2020 was supposed to herald the arrival of self-driving cars. Five years earlier, a headline in The Guardian predicted that “From 2020 you will become a permanent backseat driver” [[1]](https://ar5iv.labs.arxiv.org/html/2104.12871#bib.bib1). In 2016 Business Insider assured us that “10 million self-driving cars will be on the road by 2020” [[2]](https://ar5iv.labs.arxiv.org/html/2104.12871#bib.bib2). Tesla Motors CEO Elon Musk promised in 2019 that “A year from now, we’ll have over a million cars with full self-driving, software…everything” [[3]](https://ar5iv.labs.arxiv.org/html/2104.12871#bib.bib3). And 2020 was the target announced by several automobile companies to bring self-driving cars to market [[4]](https://ar5iv.labs.arxiv.org/html/2104.12871#bib.bib4), [[5]](https://ar5iv.labs.arxiv.org/html/2104.12871#bib.bib5), [[6]](https://ar5iv.labs.arxiv.org/html/2104.12871#bib.bib6).

Despite attempts to redefine “full self-driving” into existence [[7]](https://ar5iv.labs.arxiv.org/html/2104.12871#bib.bib7), none of these predictions has come true. It’s worth quoting AI expert Drew McDermott on what can happen when over-optimism about AI systems—in particular, self-driving cars—turns out to be wrong:

> Perhaps expectations are too high, and… this will eventually result in disaster. [S]uppose that five years from now [funding] collapses miserably as autonomous vehicles fail to roll. Every startup company fails. And there’s a big backlash so that you can’t get money for anything connected with AI. Everybody hurriedly changes the names of their research projects to something else. This condition [is] called the “AI Winter” [[8]](https://ar5iv.labs.arxiv.org/html/2104.12871#bib.bib8).

What’s most notable is that McDermott’s warning is from 1984, when, like today, the field of AI was awash with confident optimism about the near future of machine intelligence.

... (truncated, 53 KB total)