Longterm Wiki

Detection tools unreliable

paper

Authors

Holly Else

Credibility Rating

5/5
Gold (5)

Gold standard. Rigorous peer review, high editorial standards, and strong institutional reputation.

Rating inherited from publication venue: Nature

Nature article reporting on research showing that AI detection tools are unreliable at identifying AI-generated academic content, highlighting risks to research integrity and the need for improved detection methods.

Paper Details

Citations
0
Year
2023
Methodology
news article
Categories
Secure Networked Inference with Unreliable Data So

Metadata

news article · news

Summary

A preprint study found that AI chatbots like ChatGPT can generate research paper abstracts that are convincing enough to fool scientists into believing they are human-written. The research, posted on bioRxiv in December 2022, demonstrates that current detection methods are unreliable at identifying AI-generated academic content. This finding has sparked debate within the scientific community about the implications for research integrity and the need for better detection tools or policies to address AI-generated submissions.
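The unreliability described above can be illustrated with a toy heuristic. The sketch below is not the study's method or any production detector; it is a hypothetical example showing how a naive statistical signal (uniformity of sentence length) might be used to flag text as machine-generated, and why such crude signals are easy to fool in both directions. All function names and the threshold are illustrative assumptions.

```python
# Toy illustration only: a naive "AI-text" flag based on sentence-length
# burstiness. Human prose often varies sentence length more than model
# output, but this signal is weak -- which is part of why detection tools
# are unreliable.
import re
import statistics


def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths (in words), a crude signal."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)


def naive_flag(text: str, threshold: float = 4.0) -> bool:
    """Flag text as 'AI-like' when sentence lengths are suspiciously uniform."""
    return burstiness(text) < threshold


uniform = "The model works well. The data is large. The test was good. The code runs fast."
varied = ("It failed. After three weeks of debugging across two teams, we finally "
          "traced the fault to a single off-by-one error in the tokenizer. Simple.")

print(naive_flag(uniform))  # flagged: perfectly uniform sentences
print(naive_flag(varied))   # not flagged: bursty, human-like rhythm
```

A human writing in short declarative sentences would be falsely flagged, and a model prompted to vary its sentence lengths would slip through, mirroring the fragility the study observed in real detectors.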

Cited by 1 page

Page | Type | Quality
Scientific Knowledge Corruption | Risk | 91.0

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 7 KB
Abstracts written by ChatGPT fool scientists
An artificial-intelligence (AI) chatbot can write such convincing fake research-paper abstracts that scientists are often unable to spot them, according to a preprint posted on the bioRxiv server in late December [1]. Researchers are divided over the implications for science.

 
Nature 613, 423 (2023)

doi: https://doi.org/10.1038/d41586-023-00056-7

 
References

1. Gao, C. A. et al. Preprint at bioRxiv https://doi.org/10.1101/2022.12.23.521610 (2022).
2. Blanco-Gonzalez, A. et al. Preprint at arXiv https://doi.org/10.48550/arXiv.2212.08104 (2022).
3. O'Connor, S. & ChatGPT. Nurse Educ. Pract. 66, 103537 (2023).

 
 
 
 
 
... (truncated, 7 KB total)
Resource ID: bdafdb8bd5a0332e | Stable ID: N2I0ZWQwYT