Bail et al. 2018 - Exposure to Opposing Views on Social Media Can Increase Political Polarization
Credibility Rating
Gold standard. Rigorous peer review, high editorial standards, and strong institutional reputation.
Rating inherited from publication venue: PNAS
Relevant to AI safety discussions around recommendation algorithms, epistemic bubbles, and how AI-mediated information environments may reinforce rather than correct political polarization, with implications for alignment and governance of social AI systems.
Paper Details
Metadata
Summary
This PNAS study by Bail et al. experimentally tested whether exposure to opposing political views on social media reduces polarization. Contrary to the 'echo chamber' correction hypothesis, Republicans who followed a bot retweeting liberal content became substantially more conservative, while Democrats who followed a conservative bot showed slight, statistically insignificant shifts toward more liberal attitudes. The results suggest that algorithmic exposure to opposing views can backfire.
Key Points
- Randomized experiment had ~1,200 Twitter users follow bots retweeting opposing political party content for one month
- Republicans exposed to liberal content became significantly more conservative, contradicting echo-chamber mitigation assumptions
- Results suggest 'backfire effects' where cross-cutting exposure reinforces rather than moderates existing political beliefs
- Challenges common policy prescriptions that increasing exposure to diverse views will reduce polarization
- Has implications for AI content recommendation systems and how algorithmic curation shapes political epistemics
Cited by 2 pages
| Page | Type | Quality |
|---|---|---|
| Sycophancy Feedback Loop Model | Analysis | 53.0 |
| AI Preference Manipulation | Risk | 55.0 |
Cached Content Preview
23a9c979fe23842a | Stable ID: ZjE1ODExNT