Longterm Wiki

2025 review in AI & Society

paper

Credibility Rating

High (4/5)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Springer

Relevant to AI safety researchers and practitioners concerned with human-AI teaming failures; challenges the assumption that XAI tools reliably improve human oversight and decision-making in high-stakes settings.

Metadata

Importance: 68/100 · journal article · analysis

Summary

This systematic review of 35 studies challenges the view that automation bias stems solely from over-trust, identifying multiple interacting factors including AI literacy, expertise, and cognitive profiles. Notably, it finds that Explainable AI and transparency mechanisms frequently fail to reduce automation bias or improve decision accuracy. The authors argue that designs promoting active user verification are more effective interventions than explanations alone.

Key Points

  • Automation bias is driven by multiple interacting factors—AI literacy, professional expertise, cognitive profiles, and trust dynamics—not just over-trust or attention failures.
  • XAI and transparency mechanisms improve perceived acceptability but often fail to reduce automation bias or improve actual decision accuracy.
  • User engagement and critical independent verification are identified as more effective interventions than explanations alone.
  • Findings have significant implications for high-stakes domains like healthcare and law where over-reliance on AI can cause serious harm.
  • Recommends adaptive explanation designs that actively prompt users to verify AI recommendations rather than passively present rationales.

Cited by 3 pages

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 96 KB
Exploring automation bias in human–AI collaboration: a review and implications for explainable AI | AI & SOCIETY | Springer Nature Link 
 Exploring automation bias in human–AI collaboration: a review and implications for explainable AI


Open Forum · Open access
Published: 03 July 2025
Volume 41, pages 259–278 (2026)
 Abstract

 As Artificial Intelligence (AI) becomes increasingly embedded in high-stakes domains such as healthcare, law, and public administration, automation bias (AB)—the tendency to over-rely on automated recommendations—has emerged as a critical challenge in human–AI collaboration. While previous reviews have examined AB in traditional computer-assisted decision-making, research on its implications in modern AI-driven work environments remains limited. To address this gap, this research systematically investigates how AB manifests in these settings and the cognitive mechanisms that influence it. Following PRISMA 2020 guidelines, we reviewed 35 peer-reviewed studies from SCOPUS, ScienceDirect, PubMed, and Google Scholar. The included literature, published between January 2015 and April 2025, spans fields such as cognitive psychology, human factors engineering, human–computer interaction, and neuroscience, providing an interdisciplinary foundation for our analysis. Traditional perspectives attribute AB to over-trust in automation or attentional constraints, resulting in users perceiving AI-generated outputs as reliable. However, our review presents a more nuanced view. While confirming some prior findings, it also sheds light on additional interacting factors such as AI literacy, level of professional expertise, cognitive profile, developmental trust dynamics, task verification demands, and explanation complexity. Notably, although Explainable AI (XAI) and transparency mechanisms are designed to mitigate AB, overly technical, cognitively demanding, or even simplisti

... (truncated, 96 KB total)
Resource ID: a96cbf6f98644f2f | Stable ID: ODZmOGExNW