Longterm Wiki

Oxford Internet Institute: Computational Propaganda

web: comprop.oii.ox.ac.uk

This research group is a leading academic source on AI-enabled influence operations and political manipulation; relevant to AI safety discussions around misuse, information integrity, and societal-scale harms from deployed AI systems.

Metadata

Importance: 52/100

Summary

The Oxford Internet Institute's Computational Propaganda project investigates how digital technologies, bots, and algorithmic systems are weaponized to manipulate public opinion and undermine democratic processes. Using computational and social science methods, the project analyzes disinformation campaigns, social media manipulation, and platform dynamics across multiple countries. Their research informs policy responses to coordinated inauthentic behavior and influence operations.

Key Points

  • Tracks and analyzes state-sponsored and automated disinformation campaigns across social media platforms globally
  • Produces country-by-country reports on organized social media manipulation and computational propaganda tactics
  • Examines how algorithms and AI tools are leveraged to amplify misinformation and political messaging at scale
  • Bridges academic research and policy, providing evidence-based recommendations for platform governance and election integrity
  • Relevant to AI safety as AI-powered influence operations represent a near-term misuse risk with significant societal consequences

Review

The Computational Propaganda project at the Oxford Internet Institute takes a critical interdisciplinary approach to understanding how digital technologies distort public discourse and undermine democratic institutions. Led by Professor Philip Howard, the research spans sociology, information studies, and international affairs, examining how algorithms, automation, and strategic communication techniques are used to spread misleading information. The project's methodology combines computational analysis, qualitative research, and big data approaches to map the complex ecosystem of online propaganda. By investigating topics such as anti-vaccine communities, political misinformation, and coordinated influence campaigns, the researchers provide nuanced insights into how digital platforms can be manipulated. Their work has significant implications for AI safety: it highlights the risk of computational systems being used to spread harmful narratives at scale and demonstrates the need for robust governance frameworks to mitigate these threats.

Cited by 2 pages

Resource ID: 6482a9b515875f49 | Stable ID: ZjU3N2EwY2