Longterm Wiki

Bail et al. 2018 - Exposure to Opposing Views on Social Media Can Increase Political Polarization


Authors

Christopher A. Bail·Lisa P. Argyle·Taylor W. Brown·John P. Bumpus·Haohan Chen·M. B. Fallin Hunzaker·Jaemin Lee·Marcus Mann·Friedolin Merhout·Alexander Volfovsky

Credibility Rating

5/5
Gold (5)

Gold standard. Rigorous peer review, high editorial standards, and strong institutional reputation.

Rating inherited from publication venue: PNAS

Relevant to AI safety discussions around recommendation algorithms, epistemic bubbles, and how AI-mediated information environments may reinforce rather than correct political polarization, with implications for alignment and governance of social AI systems.

Paper Details

Citations
0
Year
2018
Methodology
Randomized field experiment

Metadata

Importance: 52/100 · journal article · primary source

Summary

This PNAS study by Bail et al. experimentally tested whether exposure to opposing political views on social media reduces polarization. Contrary to the 'echo chamber' correction hypothesis, they found that Republicans who followed a liberal bot became substantially more conservative, while Democrats who followed a conservative bot showed slight, statistically weaker shifts in the liberal direction, suggesting algorithmic exposure to opposing views can backfire.

Key Points

  • Randomized experiment had ~1,200 Twitter users follow bots retweeting content from the opposing political party for one month
  • Republicans exposed to liberal content became significantly more conservative, contradicting echo-chamber mitigation assumptions
  • Results suggest 'backfire effects' where cross-cutting exposure reinforces rather than moderates existing political beliefs
  • Challenges common policy prescriptions that increasing exposure to diverse views will reduce polarization
  • Has implications for AI content recommendation systems and how algorithmic curation shapes political epistemics
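The design described above is a standard randomized treatment-effect comparison: randomly assigned users follow an opposing-party bot, and their post-treatment ideology scores are compared with a control group. A minimal sketch, using entirely synthetic data (the numbers, group sizes, and the 0.12-point shift are illustrative assumptions, not figures from the paper):

```python
# Hypothetical sketch of a randomized-experiment analysis: compare mean
# post-treatment conservatism scores (1-7 scale) between users assigned
# to follow an opposing-party bot and a control group.
# All data here is synthetic illustration, not drawn from Bail et al.
import random

random.seed(0)

def simulate_group(n, baseline, treatment_shift):
    # Each score is the baseline ideology plus any treatment shift,
    # with Gaussian noise standing in for individual variation.
    return [baseline + treatment_shift + random.gauss(0, 0.5) for _ in range(n)]

control = simulate_group(600, baseline=5.0, treatment_shift=0.0)
# "Backfire": treated Republicans drift *more* conservative, not less.
treated = simulate_group(600, baseline=5.0, treatment_shift=0.12)

def mean(xs):
    return sum(xs) / len(xs)

# With random assignment, the difference in group means is an unbiased
# estimate of the average treatment effect.
effect = mean(treated) - mean(control)
print(f"Estimated treatment effect: {effect:+.2f} points")
```

Because assignment is randomized, no covariate adjustment is needed for the estimate to be unbiased; the actual study additionally measured compliance with bot-following and pre/post attitudes.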

Cited by 2 pages

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 0 KB
Resource ID: 23a9c979fe23842a | Stable ID: ZjE1ODExNT