Longterm Wiki

Human performance in detecting deepfakes: A systematic review and meta-analysis

Authors

Alexander Diel·Tania Lalgi·Isabel Carolin Schröter·Karl F. MacDorman·Martin Teufel·Alexander Bäuerle

Credibility Rating

High (4/5)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: ScienceDirect

A systematic review and meta-analysis examining human ability to detect deepfakes, directly relevant to understanding vulnerabilities in human judgment against synthetic media threats and informing AI safety defenses.

Paper Details

Citations
27
Year
2024
Methodology
peer-reviewed
Categories
Computers in Human Behavior Reports

Metadata

journal article · analysis

Cited by 6 pages

Cached Content Preview

HTTP 200 · Fetched Feb 27, 2026 · 4 KB
## [Computers in Human Behavior Reports](https://www.sciencedirect.com/journal/computers-in-human-behavior-reports "Go to Computers in Human Behavior Reports on ScienceDirect")

[Volume 16](https://www.sciencedirect.com/journal/computers-in-human-behavior-reports/vol/16/suppl/C "Go to table of contents for this volume/issue"), December 2024, 100538

# Human performance in detecting deepfakes: A systematic review and meta-analysis of 56 papers

Alexander Diel, Tania Lalgi, Isabel Carolin Schröter, Karl F. MacDorman, Martin Teufel, Alexander Bäuerle

[https://doi.org/10.1016/j.chbr.2024.100538](https://doi.org/10.1016/j.chbr.2024.100538 "Persistent link using digital object identifier") [Get rights and content](https://s100.copyright.com/AppDispatchServlet?publisherName=ELS&contentID=S2451958824001714&orderBeanReset=true)

Under a Creative Commons [license](http://creativecommons.org/licenses/by-nc-nd/4.0/)

Open access

## Highlights

- Synthesized human deepfake detection is at chance across different modalities.

- Synthesized human deepfake detection is worse than detection of real stimuli.

- Strategies aimed at improving deepfake detection successfully increase performance.


## Abstract

_Deepfakes_ are AI-generated media designed to look real, often with the intent to deceive. Deepfakes threaten public and personal safety by facilitating disinformation, propaganda, and identity theft. Though research has been conducted on human performance in deepfake detection, the results have not yet been synthesized. This systematic review and meta-analysis investigates human deepfake detection accuracy. Searches in PubMed, ScienceGov, JSTOR, Google Scholar, and paper references, conducted in June and October 2024, identified empirical studies measuring human detection of high-quality deepfakes. After pooling accuracy, odds-ratio, and sensitivity (_d′_) effect sizes (_k_ = 137 effects) from 56 papers involving 86,155 participants, we analyzed 1) overall deepfake detection performance, 2) performance across stimulus types (audio, image, text, and video), and 3) the effects of detection-improvement strategies. Overall deepfake dete

... (truncated, 4 KB total)
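The abstract pools sensitivity (_d′_) as one of its effect sizes. As context for readers unfamiliar with the measure, the following is a minimal sketch of how _d′_ is computed in signal detection theory; the numbers are illustrative and are not taken from the paper.

```python
# Hypothetical sketch: the signal-detection sensitivity index d',
# one of the effect-size measures pooled in the meta-analysis.
# d' = z(hit rate) - z(false-alarm rate), where z is the inverse
# of the standard normal CDF. Example rates below are made up.
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Sensitivity index: distance between signal and noise distributions."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# A detector at chance (hits equal false alarms) has d' = 0,
# matching the paper's finding that overall detection is at chance.
print(d_prime(0.5, 0.5))  # 0.0

# Better-than-chance detection yields d' > 0.
print(round(d_prime(0.7, 0.3), 3))
```

A chance-level observer, as the review reports for overall deepfake detection, produces _d′_ ≈ 0 because the hit and false-alarm rates coincide.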
Resource ID: 5c1ad27ec9acc6f4 | Stable ID: NDFlYTRkZj