Stanford HAI: The Disinformation Machine
Credibility Rating
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: Stanford HAI
Published by Stanford HAI, this piece is relevant to AI safety discussions around misuse risks, particularly how large language models can be weaponized for influence operations and the societal harms that may result from unchecked AI content generation.
Metadata
Summary
Stanford HAI examines the growing threat of AI-generated disinformation and propaganda, exploring how susceptible individuals and societies are to algorithmically produced misleading content. The piece investigates the mechanisms by which AI systems can generate and amplify false narratives, and considers implications for democratic processes and public trust.
Key Points
- AI systems can generate highly convincing disinformation at scale, making it harder to distinguish from authentic human-produced content.
- Research suggests people may be more susceptible to AI-generated propaganda than previously assumed, with limited ability to detect synthetic text.
- The scale and speed of AI content generation pose novel challenges for existing fact-checking and content moderation infrastructure.
- Mitigation strategies may include watermarking AI content, improved media literacy, and policy-level interventions on AI deployment.
- The intersection of AI capabilities and information warfare raises significant concerns for electoral integrity and societal cohesion.
Cached Content Preview
With a bit of prodding, AI-generated propaganda is more effective than propaganda written by humans.
[Michael Tomz](https://politicalscience.stanford.edu/people/michael-tomz), a professor of political science at Stanford School of Humanities and Sciences and faculty affiliate at the Stanford Institute for Human-Centered AI (HAI), recently gave a talk in Taiwan about the use of AI to generate propaganda. That morning, he recalled, he saw a headline in the _Taipei Times_ reporting that the Chinese government was using AI-generated social media posts to influence voters in Taiwan and the United States.
“That very day, the newspaper was documenting the Chinese government doing exactly what I was presenting on,” Tomz said.
AI propaganda is here. But is it persuasive? [Recent research](https://academic.oup.com/pnasnexus/article/3/2/pgae034/7610937?login=false) published in _PNAS Nexus_ and conducted by Tomz, [Josh Goldstein](https://cset.georgetown.edu/staff/josh-a-goldstein/) from the Center for Security and Emerging Technology at Georgetown University, and three Stanford colleagues—master’s student [Jason Chao](https://profiles.stanford.edu/jason-chao), research scholar [Shelby Grossman](https://fsi.stanford.edu/people/shelby-grossman), and lecturer [Alex Stamos](https://cyber.fsi.stanford.edu/people/alex-stamos)—examined the effectiveness of AI-generated propaganda.
They found, in short, that it works.
### When Large Language Models Lie
The researchers conducted an experiment, funded by HAI, in which participants were assigned to one of three groups.
_Read the full study, [How persuasive is AI-generated propaganda?](https://academic.oup.com/pnasnexus/article/3/2/pgae034/7610937?login=false)_
The first group, the control, read a series of thesis statements on subjects that known propagandists want people to believe. “Most U.S. drone strikes in the Middle East have targeted civilians, rather than terrorists,” for instance. Or, “Western sanctions have led to a shortage of medical supplies in Syria.” Because this group read _only_ these statements and no propaganda related to them, it provided the researchers with a baseline measure of how many people believe these claims. The second group of
... (truncated, 10 KB total)