Longterm Wiki

Building Machines That Learn and Think Like People

paper

Authors

Brenden M. Lake · Tomer D. Ullman · Joshua B. Tenenbaum · Samuel J. Gershman

Credibility Rating

Good (3/5)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

Foundational cognitive science paper comparing human and machine learning, examining how deep learning systems diverge from human intelligence in crucial ways. Relevant for understanding AI alignment and for developing more human-like AI reasoning.

Paper Details

Citations
855 (34 influential)
Year
2016
Methodology
peer-reviewed
Categories
Behavioral and Brain Sciences

Metadata

arXiv preprint · primary source

Abstract

Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.

Summary

This paper argues that while deep neural networks have achieved impressive performance on tasks like object recognition and game-playing, they fundamentally differ from human intelligence in important ways. The authors review cognitive science research and propose that human-like AI systems must go beyond pattern recognition to: (1) build causal models supporting explanation and understanding, (2) ground learning in intuitive theories of physics and psychology, and (3) leverage compositionality and learning-to-learn for rapid knowledge acquisition and generalization. The paper advocates for combining neural network advances with more structured cognitive models to achieve truly human-like machine learning.
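The compositionality argument in point (3) can be made concrete with a toy sketch. This is an illustration of the general idea, not the paper's actual model: when concepts are built from a small inventory of reusable primitive parts (the hypothetical part names below are invented for illustration), the number of expressible concepts grows combinatorially, and learning a single new primitive immediately recombines with everything already known.

```python
import itertools

# Hypothetical primitive parts; not taken from the paper.
primitives = ["line", "arc", "dot"]

def concepts(parts, size):
    """All concepts formed as ordered combinations of `size` parts."""
    return list(itertools.product(parts, repeat=size))

print(len(concepts(primitives, 2)))   # 3 primitives -> 9 two-part concepts

primitives.append("hook")             # learning one new part...
print(len(concepts(primitives, 2)))   # ...yields 16 two-part concepts
```

One new primitive enlarged the space of two-part concepts from 9 to 16 without any relearning of existing parts, which is the kind of rapid generalization the authors argue pattern-recognition systems lack.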

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| Long-Timelines Technical Worldview | Concept | 91.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 98 KB
# Building Machines That Learn and Think Like People

Brenden M. Lake,¹ Tomer D. Ullman,²·⁴ Joshua B. Tenenbaum,²·⁴ and Samuel J. Gershman³·⁴

¹Center for Data Science, New York University

²Department of Brain and Cognitive Sciences, MIT

³Department of Psychology and Center for Brain Science, Harvard University

⁴Center for Brains, Minds and Machines

## Abstract

Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.

## 1 Introduction

Artificial intelligence (AI) has been a story of booms and busts, yet by any traditional measure of success, the last few years have been marked by exceptional progress. Much of this progress has come from recent advances in “deep learning,” characterized by learning large neural-network-style models with multiple layers of representation. These models have achieved remarkable gains in many domains spanning object recognition, speech recognition, and control (LeCun et al., [2015](https://ar5iv.labs.arxiv.org/html/1604.00289#bib.bib147 ""); Schmidhuber, [2015](https://ar5iv.labs.arxiv.org/html/1604.00289#bib.bib231 "")). In object recognition, Krizhevsky et al. ( [2012](https://ar5iv.labs.arxiv.org/html/1604.00289#bib.bib136 "")) trained a deep convolutional neural network (convnets; LeCun et al., [1989](https://ar5iv.labs.arxiv.org/html/1604.00289#bib.bib148 "")) that nearly halved the error rate of the previous state-of-the-art on the most challenging benchmark to date. In the years since, convnets continue to dominate, recently approaching human-level performance on some object recognition benchmarks (Szegedy et al., [2014](https://ar5iv.labs.arxiv.org/html/1604.00289#bib.bib255 ""); Russakovsky et al., [2015](https://ar5iv.labs.arxiv.org/html/1604.00289#bib.bib219 ""); He et al., [2015](https://ar5
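The "multiple layers of representation" idea behind the convnets cited above can be sketched minimally. This is a toy NumPy illustration, not the cited architectures: each stage convolves the previous stage's output with a (here random, untrained) filter and applies a ReLU nonlinearity, so each layer re-represents the one below it.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image with a kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

# Two stacked conv + ReLU stages over a 28x28 input.
rng = np.random.default_rng(0)
image = rng.standard_normal((28, 28))
k1 = rng.standard_normal((3, 3))
k2 = rng.standard_normal((3, 3))

layer1 = relu(conv2d(image, k1))   # 26 x 26 feature map
layer2 = relu(conv2d(layer1, k2))  # 24 x 24 feature map
print(layer2.shape)                # (24, 24)
```

Real convnets learn the filter weights end-to-end from data and add pooling and many channels; the point here is only the layered re-representation that the authors contrast with human causal, model-based learning.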

... (truncated, 98 KB total)