Longterm Wiki

Research shows humans near random chance at detecting AI-generated text


Credibility Rating

4/5 (High)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: ScienceDirect

Relevant to AI deployment and governance debates around academic integrity; demonstrates that current AI text detection—both human and automated—is insufficient, with implications for how AI capabilities are assessed and managed in educational settings.

Metadata

Importance: 42/100 · journal article · primary source

Summary

A survey experiment with 63 university lecturers found that both humans and AI detectors perform only slightly better than random chance at identifying AI-generated academic texts, with humans achieving 57% recognition for AI texts and 64% for human texts. Professional-level AI writing was correctly identified by fewer than 20% of participants, raising serious concerns about academic integrity and the reliability of current AI detection methods.

Key Points

  • Human evaluators and AI detectors both identified AI-generated text only marginally above chance (~57% for AI texts vs. 50% baseline).
  • Professional-level AI-generated texts were nearly undetectable, with less than 20% of lecturers correctly classifying them.
  • No statistically significant difference was found between human and machine detection performance.
  • Prior teaching experience slightly improved detection accuracy, but subjective text quality judgments were unaffected by actual authorship.
  • Findings suggest traditional written academic assessments are increasingly vulnerable to undetected AI use, warranting reassessment of evaluation formats.
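The claim that 57% accuracy is "only marginally above chance" can be checked with an exact binomial test against the 50% baseline. The sketch below uses a hypothetical trial count of 100 judgments (the actual number of judgments per condition is not given in this excerpt), so the resulting p-value is illustrative only:

```python
from math import comb

def binom_p_two_sided(k: int, n: int, p: float = 0.5) -> float:
    """Exact two-sided binomial test: sum the probabilities of all
    outcomes no more likely than the observed count k."""
    probs = [comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(n + 1)]
    p_obs = probs[k]
    return sum(q for q in probs if q <= p_obs + 1e-12)

# Hypothetical: 57 correct out of 100 AI-text judgments vs. 50% chance
p_value = binom_p_two_sided(57, 100)
```

With these assumed numbers the p-value sits well above 0.05, which is consistent with the paper's framing that human detection is only marginally better than guessing; a much larger number of judgments would be needed before 57% accuracy becomes statistically distinguishable from chance.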

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| AI-Powered Consensus Manufacturing | Risk | 64.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 5 KB



## [International Review of Economics Education](https://www.sciencedirect.com/journal/international-review-of-economics-education "Go to International Review of Economics Education on ScienceDirect")

[Volume 49](https://www.sciencedirect.com/journal/international-review-of-economics-education/vol/49/suppl/C "Go to table of contents for this volume/issue"), June 2025, 100321


# Do humans identify AI-generated text better than machines? Evidence based on excerpts from German theses

Alexandra Fiedler, Jörg Döpke


[https://doi.org/10.1016/j.iree.2025.100321](https://doi.org/10.1016/j.iree.2025.100321 "Persistent link using digital object identifier")

Under a Creative Commons [license](http://creativecommons.org/licenses/by/4.0/)

Open access

## Highlights

- A survey of 63 lecturers revealed that only half of the AI-generated texts could be recognized as such.

- Humans recognize AI texts slightly better than AI detectors.

- The higher the writing level of the AI-generated texts, the more difficult they are to distinguish from human texts.

- Human assessment of text quality does not depend on whether the text is actually from an AI.

## Abstract

We investigate whether human experts can identify AI-generated academic texts more accurately than current machine-based detectors. In a survey experiment at a German university of applied sciences, 63 lecturers in engineering, economics, and social sciences were asked to evaluate short excerpts (200–300 words) from both human-generated and AI-generated texts. The texts varied by discipline and by the writing level (student vs. professional) of the AI-generated content. The results show that both human evaluators and AI detectors correctly identified AI-generated texts only slightly better than chance, with humans achieving a recogniti

... (truncated, 5 KB total)
Resource ID: 42f78f51ca2fdb71 | Stable ID: YTAyZDY2ZT