Longterm Wiki

Tow Center for Digital Journalism

web

Relevant to AI safety discussions around dual-use risks of AI systems and governance of AI deployment in high-stakes information environments; useful for understanding real-world tensions in AI-assisted content moderation and fact-checking.

Metadata

Importance: 42/100 · blog post · analysis

Summary

This EDMO article explores the dual role of AI in the information ecosystem: while AI systems generate and amplify misinformation at scale, they are also being deployed as tools to assist fact-checkers in combating false content. The piece examines the tensions and paradoxes fact-checking organizations face when adopting AI technologies that also power the misinformation they are trying to counter.

Key Points

  • AI systems simultaneously enable the mass production of misinformation and offer tools that can help fact-checkers detect and debunk false claims.
  • Fact-checking organizations face a paradox in relying on AI tools built by the same companies whose platforms and models spread misinformation.
  • AI-assisted fact-checking can improve speed and scale but raises concerns about accuracy, bias, and editorial independence.
  • The article highlights the need for governance frameworks to ensure AI is used responsibly in the information verification ecosystem.
  • Human oversight remains essential, as fully automated fact-checking risks propagating errors or being gamed by bad actors.

Cited by 1 page

| Page | Type | Quality |
|------|------|---------|
| AI-Era Epistemic Infrastructure | Approach | 59.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 15 KB


### [Blog Posts](https://edmo.eu/resources/edmo-blog/)

July 2, 2025


#### Part of the problem and part of the solution: the paradox of AI in fact-checking


_The views expressed in this publication are those of the author and do not necessarily reflect the official stance of the European Digital Media Observatory._

_Authors:_ _Laurence Dierickx, Carl-Gustav Lindén, Duc-Tien Dang-Nguyen. [University of Bergen](https://www.uib.no/en), [NORDIS](https://www.nordishub.eu/)._

**The paradox of AI-based technology in fact-checking lies in its dual nature as both a tool to help verify facts and a tool to create or amplify information disorder.**

AI was at the forefront of the [Brussels EDMO Annual Conference](https://edmo.eu/event/2024-edmo-annual-conference/#programme) organised in May 2024. The panel on “AI part of the solution” highlighted AI’s dual role in fact-checking and fighting misinformation. AI helps identify disinformation actors, trace narratives, and assist fact-checkers by spotting patterns and verifying claims. However, human-in-the-loop approaches are needed to guarantee the accuracy and reliability of the verdicts. At the same time, AI can improve the quality and quantity of disinformation, making it [both an ally and an adversary](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0303183) of fact-checkers. One year later, how has this landscape changed?

**Disinformation, censorship and ethical concerns**

Over the past twelve months, much has been written and debated about whether “the AI threat” – specifically “the generative AI threat” – [has been overstated](https://misinforeview.hks.harvard.edu/article/misinformation-reloaded-fears-about-the-impact-of-generative-ai-on-misinformation-are-overblown/). Of course, generative AI (GAI) did not invent disinformation; information disorder existed long before ChatGPT emerged, and many people had been exposed to misleading content online for years. Still, concerns about generative AI’s impact on disinformation are genuine, not mere moral panic.

Malicious actors are increasingly using AI to amplify disinformation. One notable example is a Russian network that used AI chatbots to spread pro-Kremlin narratives, infiltrating major online platforms to spread false information. [According to an audit by NewsGuard](https://www.newsguardtech.com/special-reports/moscow-based-g

... (truncated, 15 KB total)
Resource ID: 881fde79a514bec3 | Stable ID: ZTU3YjI2N2