
Buçinca et al. (2021)

paper

Authors

Zana Buçinca · Maja Barbara Malaya · Krzysztof Z. Gajos

Credibility Rating

3/5
Good (3)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

Empirical study examining human overreliance on AI-powered decision support systems and the ineffectiveness of explanations in mitigating this behavior, directly relevant to AI safety concerns about human-AI interaction and appropriate reliance calibration.

Paper Details

Citations
481 (13 influential)
Year
2021

Metadata

arXiv preprint · primary source

Abstract

People supported by AI-powered decision support tools frequently overrely on the AI: they accept an AI's suggestion even when that suggestion is wrong. Adding explanations to the AI decisions does not appear to reduce the overreliance and some studies suggest that it might even increase it. Informed by the dual-process theory of cognition, we posit that people rarely engage analytically with each individual AI recommendation and explanation, and instead develop general heuristics about whether and when to follow the AI suggestions. Building on prior research on medical decision-making, we designed three cognitive forcing interventions to compel people to engage more thoughtfully with the AI-generated explanations. We conducted an experiment (N=199), in which we compared our three cognitive forcing designs to two simple explainable AI approaches and to a no-AI baseline. The results demonstrate that cognitive forcing significantly reduced overreliance compared to the simple explainable AI approaches. However, there was a trade-off: people assigned the least favorable subjective ratings to the designs that reduced the overreliance the most. To audit our work for intervention-generated inequalities, we investigated whether our interventions benefited equally people with different levels of Need for Cognition (i.e., motivation to engage in effortful mental activities). Our results show that, on average, cognitive forcing interventions benefited participants higher in Need for Cognition more. Our research suggests that human cognitive motivation moderates the effectiveness of explainable AI solutions.

Summary

This paper addresses the problem of overreliance on AI decision support systems, where users accept AI suggestions even when incorrect. The authors find that simple explanations do not reduce overreliance and may increase it. They propose three cognitive forcing interventions designed to compel users to engage more thoughtfully with AI explanations, drawing on dual-process theory and medical decision-making research. In an experiment with 199 participants, cognitive forcing significantly reduced overreliance compared to simple explainable AI approaches, though users rated these interventions less favorably. Importantly, the interventions benefited participants with higher Need for Cognition more, suggesting that individual differences in cognitive motivation moderate the effectiveness of explainable AI solutions.
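To make the central construct concrete, here is a minimal sketch of how overreliance could be quantified from trial-level data: the fraction of trials with a wrong AI suggestion in which the participant nevertheless accepted it. The `Trial` record and `overreliance_rate` helper are illustrative assumptions, not the paper's actual analysis code, and the authors' exact operationalization may differ.

```python
# Illustrative sketch (not from the paper): quantifying overreliance as
# agreement with the AI on exactly those trials where the AI was wrong.
from dataclasses import dataclass

@dataclass
class Trial:
    ai_suggestion: str   # answer recommended by the AI
    correct_answer: str  # ground-truth answer for the task
    user_answer: str     # participant's final decision

def overreliance_rate(trials: list[Trial]) -> float:
    """Fraction of wrong-AI trials where the participant accepted the AI."""
    wrong_ai = [t for t in trials if t.ai_suggestion != t.correct_answer]
    if not wrong_ai:
        return 0.0  # no wrong suggestions observed; rate is undefined here
    accepted = sum(t.user_answer == t.ai_suggestion for t in wrong_ai)
    return accepted / len(wrong_ai)

# Example: the AI errs on one of two trials and the user follows it anyway.
trials = [
    Trial("A", "A", "A"),  # AI correct, user agrees (appropriate reliance)
    Trial("B", "C", "B"),  # AI wrong, user follows anyway (overreliance)
]
print(overreliance_rate(trials))  # 1.0 (1 of 1 wrong-AI trials accepted)
```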

Cited by 1 page

| Page | Type | Quality |
|------|------|---------|
| AI-Human Hybrid Systems | Approach | 91.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 98 KB
# To Trust or to Think: Cognitive Forcing Functions Can Reduce Overreliance on AI in AI-assisted Decision-making

Zana Buçinca ([zbucinca@seas.harvard.edu](mailto:zbucinca@seas.harvard.edu)), Harvard University, 33 Oxford St., Cambridge, MA 02138, USA; Maja Barbara Malaya ([217723@edu.p.lodz.pl](mailto:217723@edu.p.lodz.pl)), Lodz University of Technology, ul. Stefana Żeromskiego 116, 90-924 Łódź, Poland; and Krzysztof Z. Gajos ([kgajos@eecs.harvard.edu](mailto:kgajos@eecs.harvard.edu)), Harvard University, 33 Oxford St., Cambridge, MA 02138, USA

(Received October 2020; revised January 2021; accepted January 2021)

###### Abstract.

People supported by AI-powered decision support tools frequently overrely on the AI: they accept an AI’s suggestion even when that suggestion is wrong. Adding explanations to the AI decisions does not appear to reduce the overreliance and some studies suggest that it might even increase it.
Informed by the dual-process theory of cognition, we posit that people rarely engage analytically with each individual AI recommendation and explanation, and instead develop general heuristics about whether and when to follow the AI suggestions. Building on prior research on medical decision-making, we designed three cognitive forcing interventions to compel people to engage more thoughtfully with the AI-generated explanations.
We conducted an experiment (N=199), in which we compared our three cognitive forcing designs to two simple explainable AI approaches and to a no-AI baseline. The results demonstrate that cognitive forcing significantly reduced overreliance compared to the simple explainable AI approaches. However, there was a trade-off: people assigned the least favorable subjective ratings to the designs that reduced the overreliance the most. To audit our work for intervention-generated inequalities, we investigated whether our interventions benefited equally people with different levels of Need for Cognition (i.e., motivation to engage in effortful mental activities). Our results show that, on average, cognitive forcing interventions benefited participants higher in Need for Cognition more.
Our research suggests that human cognitive motivation moderates the effectiveness of explainable AI solutions.

explanations; artificial intelligence; trust; cognition

Published in PACMHCI, Vol. 5, No. CSCW1, Article 188 (April 2021). DOI: 10.1145/3449287. CCS: Human-centered computing, Interaction design.

## 1. Introduction

From loan approval to disease diagnosis, humans are increasingly being assisted by artificially intelligent (AI) systems in decision-making tasks. By combining two types of intelligence, these emerging sociotechnical systems (i.e., human+AI teams) were expected to perform better than either people or AIs alone (Kamar et al., [2012](https://ar5iv.labs.arxiv.org/html/2102.09692#bib.bib37 ""); Kam

... (truncated, 98 KB total)
Resource ID: fa89fdbc996108aa | Stable ID: Y2U4MDEyN2