Longterm Wiki

Harvard Kennedy School Misinformation Review article

A contrarian but evidence-grounded perspective useful for AI safety researchers and policymakers evaluating the actual risks of AI-generated misinformation, countering common assumptions in AI governance discussions about information integrity.

Metadata

Importance: 52/100 · journal article · analysis

Summary

This Harvard Kennedy School Misinformation Review article argues that fears about generative AI dramatically worsening the misinformation landscape are exaggerated, drawing on empirical evidence about how misinformation actually spreads and is consumed. The authors contend that psychological and sociological factors limiting misinformation uptake pre-AI remain relevant, and that demand-side constraints on belief change are often underappreciated. The piece offers a counterpoint to alarmist narratives about AI-generated content flooding the information ecosystem.

Key Points

  • Empirical research suggests people are less susceptible to misinformation than commonly assumed, limiting the practical impact of AI-generated false content.
  • The supply of misinformation has always exceeded demand; AI increasing supply does not necessarily translate to increased belief or harm.
  • Existing cognitive and social mechanisms that constrain misinformation spread will likely continue to operate even with AI-generated content.
  • Overblown fears risk misallocating resources and attention toward AI misinformation rather than more pressing, evidence-backed information threats.
  • The article calls for more empirically grounded, measured assessments of AI's actual versus hypothetical impact on public epistemics.

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| Epistemic Collapse | Risk | 49.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 40 KB
_Many observers of the current explosion of generative AI worry about its impact on our information environment, with concerns being raised about the increased quantity, quality, and personalization of misinformation. We assess these arguments with evidence from communication studies, cognitive science, and political science. We argue that current concerns about the effects of generative AI on the misinformation landscape are overblown._

###### By

[Felix M. Simon](https://misinforeview.hks.harvard.edu/article/author/felix-m-simon "Posts by Felix M. Simon")

Oxford Internet Institute, University of Oxford, UK

[Sacha Altay](https://misinforeview.hks.harvard.edu/article/author/sacha-altay1 "Posts by Sacha Altay")

Department of Political Science, University of Zurich, Switzerland

[Hugo Mercier](https://misinforeview.hks.harvard.edu/article/author/hugo-mercier "Posts by Hugo Mercier")

Institut Jean Nicod, Département d’études cognitives, ENS, EHESS, PSL University, CNRS, France

###### Article Metrics

- 53 CrossRef Citations
- 2312 PDF Downloads
- 42594 Page Views

![](https://misinforeview.hks.harvard.edu/wp-content/uploads/2023/10/AlanWarburton-SocialMedia-2-1024x614.png)Image by [Alan Warburton](https://alanwarburton.co.uk/ "") on [Better Images on AI](https://betterimagesofai.org/images?artist=AlanWarburton&title=SocialMedia "")

###### Topics

- [Artificial Intelligence](https://misinforeview.hks.harvard.edu/explore/?fwp_topic=artificial-intelligence "Artificial Intelligence")
- / [Impact](https://misinforeview.hks.harvard.edu/explore/?fwp_topic=impact "Impact")

* * *

## **Introduction**

Recent progress in generative AI has led to concerns that it will “trigger the next misinformation nightmare” (Gold & Fisher, 2023), that people “will not be able to know what is true anymore” (Metz, 2023), and that we are facing a “tech-enabled Armageddon” (Scott, 2023).

Generative AI systems are capable of generating new forms of data by applying machine learning to large quantities of training data. This new data can include text (such as Google’s Bard, Meta’s LLaMa, or OpenAI’s ChatGPT), visuals (such as Stable Diffusion or OpenAI’s DALL-E), or audio (such as Microsoft’s VALL-E). Most users can produce output with these systems quickly and easily, and, depending on the instructions, that output is sophisticated enough that humans perceive it as indistinguishable from human-generated content (Groh et al., 2022).

According to various voices, including some leading AI researchers, generative AI will make it easier to create realistic but false or misleading content at scale, with potentially catastrophic outcomes for people’s beliefs and behaviors, t

... (truncated, 40 KB total)
Resource ID: e4d7abe6d2b4ef5d | Stable ID: ZWU0YTFlYz