Longterm Wiki

Credibility Rating

High (4/5)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Carnegie Endowment

Though mislabeled as a WEF report, this is a Carnegie Endowment policy guide on disinformation countermeasures; relevant to AI safety discussions around AI-generated misinformation, content moderation policy, and governance of information ecosystems.

Metadata

Importance: 42/100 · policy brief · analysis

Summary

This Carnegie Endowment report provides an evidence-based policy guide for countering disinformation, synthesizing research on what interventions actually work. It evaluates a range of strategies—from platform regulation to media literacy—and offers actionable recommendations for policymakers seeking to address information integrity threats.

Key Points

  • Reviews empirical evidence on the effectiveness of various anti-disinformation interventions, distinguishing proven approaches from unproven ones.
  • Covers platform-level, government, and civil society responses to disinformation, including content moderation, labeling, and media literacy programs.
  • Emphasizes the importance of evidence-based policymaking over reactive or politically motivated responses to information threats.
  • Highlights risks of over-correction, such as censorship or chilling effects on legitimate speech when countering disinformation.
  • Relevant to AI governance as AI-generated content and synthetic media increasingly intersect with disinformation challenges.

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| AI-Era Epistemic Infrastructure | Approach | 59.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 98 KB
![Countering Disinformation Effectively: An Evidence-Based Policy Guide](https://assets.carnegieendowment.org/_/eyJrZXkiOiJzdGF0aWMvbWVkaWEvaW1hZ2VzL0JhdGVtYW4tY292ZXJfZGlzaW5mb3JtYXRpb25fZDMwNTAxNTgtNjcwYS00ZDQyLWE0NWYtYTFhMWE0OTU0ZWRmLTEucG5nIn0=)

Report

## Countering Disinformation Effectively: An Evidence-Based Policy Guide

A high-level, evidence-informed guide to some of the major proposals for how democratic governments, platforms, and others can counter disinformation.


By [Jon Bateman](https://carnegieendowment.org/people/jon-bateman) and Dean Jackson

Published on Jan 31, 2024

### Summary

Disinformation is widely seen as a pressing challenge for democracies worldwide. Many policymakers are grasping for quick, effective ways to dissuade people from adopting and spreading false beliefs that degrade democratic discourse and can inspire violent or dangerous actions. Yet disinformation has proven difficult to define, understand, and measure, let alone address.

Even when leaders know what they want to achieve in countering disinformation, they struggle to make an impact and often don’t realize how little is known about the effectiveness of policies commonly recommended by experts. Policymakers also sometimes fixate on a few pieces of the disinformation puzzle—including novel technologies like social media and artificial intelligence (AI)—without considering the full range of possible responses in realms such as education, journalism, and political institutions.

This report offers a high-level, evidence-informed guide to some of the major proposals for how democratic governments, platforms, and others can counter disinformation. It distills core insights from empirical research and real-world data on ten diverse kinds of policy interventions, including fact-checking, foreign sanctions, algorithmic adjustments, and counter-messaging campaigns. For each case study, we aim to give policymakers an informed sense of the prospects for success—bridging the gap between the mostly meager scientific understanding and the perceived need to act. This means answering three core questions: How much is known about an intervention? How effective does the intervention seem, given current knowledge? And how easy is it to implement at scale?

#### Overall Findings

- **There is no silver bullet or “best” policy option.** None of the interventions considered in this report were simultaneously well-studied, very effective, and easy to scale. Rather, the utility of most interventions seems quite uncertain and likely depends on myriad factors that researchers have barely begun to probe. For example, the precise wording and presentation of social media labels and fact-checks can matter a lot, while counter-messaging campaigns depend on a delicate match of receptive audiences with credible speakers. Bold claims that any one policy is the singular, urgent solution to disinformation should be treated with caution.
- **Policymakers should set realistic

... (truncated, 98 KB total)
Resource ID: d25a731963a5a372 | Stable ID: ZTZhNWYzZT