Longterm Wiki

Algorithmic amplification of political content

paper

Authors

Nyhan, Brendan · Settle, Jaime · Thorson, Emily · Wojcieszak, Magdalena · Barberá, Pablo · Chen, Annie Y. · Allcott, Hunt · Brown, Taylor · Crespo-Tenorio, Adriana · Dimmery, Drew · Freelon, Deen · Gentzkow, Matthew · González-Bailón, Sandra · Guess, Andrew M. · Kennedy, Edward · Kim, Young Mie · Lazer, David · Malhotra, Neil · Moehler, Devra · Pan, Jennifer · Thomas, Daniel Robert · Tromble, Rebekah · Rivera, Carlos Velasco · Wilkins, Arjun · Xiong, Beixian · de Jonge, Chad Kiewiet · Franco, Annie · Mason, Winter · Stroud, Natalie Jomini · Tucker, Joshua A.

Credibility Rating

5/5
Gold(5)

Gold standard. Rigorous peer review, high editorial standards, and strong institutional reputation.

Rating inherited from publication venue: Nature

A high-profile empirical study in Nature examining how Facebook's feed shapes exposure to politically like-minded content, relevant to AI safety discussions about value alignment, autonomy preservation, and the societal impacts of deployed ML systems.

Metadata

Importance: 62/100 · journal article · primary source

Summary

This Nature paper combines observational data covering the entire population of active adult US Facebook users in 2020 with a multi-wave field experiment on 23,377 users. It finds that content from like-minded sources makes up the majority of what people see on the platform, but that reducing this exposure by about one-third during the 2020 US presidential election produced no measurable change in polarization or related attitudes. The study provides large-scale evidence about how algorithmic curation shapes political information exposure, and about the limits of feed interventions as a lever on democratic discourse.

Key Points

  • Empirically measures exposure to like-minded sources at platform scale, using 2020 data on the entire population of active adult US Facebook users.
  • Finds that like-minded sources supply the majority of what users see, although political information and news are only a small fraction of those exposures.
  • A preregistered field experiment (n = 23,377) reduced like-minded exposure by about one-third; it increased exposure to cross-cutting sources and decreased exposure to uncivil language, but had no measurable effect on eight attitudinal measures such as affective polarization.
  • Raises governance questions about transparency and accountability of recommendation systems in politically sensitive contexts.
  • Relevant to AI safety concerns about persuasive AI systems and the autonomy-preserving properties of deployed ML systems.

Cited by 1 page

Page | Type | Quality
AI Preference Manipulation | Risk | 55.0

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 73 KB
Like-minded sources on Facebook are prevalent but not polarizing | Nature 

Subjects: Communication · Politics

An Author Correction to this article was published on 01 November 2023.

 Abstract

 Many critics raise concerns about the prevalence of ‘echo chambers’ on social media and their potential role in increasing political polarization. However, the lack of available data and the challenges of conducting large-scale field experiments have made it difficult to assess the scope of the problem [1,2]. Here we present data from 2020 for the entire population of active adult Facebook users in the USA showing that content from ‘like-minded’ sources constitutes the majority of what people see on the platform, although political information and news represent only a small fraction of these exposures. To evaluate a potential response to concerns about the effects of echo chambers, we conducted a multi-wave field experiment on Facebook among 23,377 users for whom we reduced exposure to content from like-minded sources during the 2020 US presidential election by about one-third. We found that the intervention increased their exposure to content from cross-cutting sources and decreased exposure to uncivil language, but had no measurable effects on eight preregistered attitudinal measures such as affective polarization, ideological extremity, candidate evaluations and belief in false claims. These precisely estimated results suggest that although exposure to content from like-minded sources on social media is common, reducing its prevalence during the 2020 US presidential election did not correspondingly reduce polarization in beliefs or attitudes.
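To make the intervention concrete: the experiment probabilistically suppressed posts from like-minded sources so that a treated user's exposure fell by about one-third. The toy sketch below illustrates that kind of downsampling filter; the function, field names, and scoring scheme are invented for illustration and are not the paper's or Facebook's actual implementation.

```python
import random

# Fraction of like-minded posts to drop, mirroring the roughly
# one-third reduction in exposure described in the abstract.
LIKE_MINDED_DOWNSAMPLE = 1 / 3


def build_feed(posts, user_leaning, rng=random.random):
    """Rank posts by score, then probabilistically drop like-minded ones.

    posts: list of dicts with 'score' (float) and 'source_leaning' keys.
    user_leaning: the user's own leaning; posts whose source matches it
    count as 'like-minded'. Cross-cutting posts are always kept.
    """
    feed = []
    for post in sorted(posts, key=lambda p: p["score"], reverse=True):
        like_minded = post["source_leaning"] == user_leaning
        if like_minded and rng() < LIKE_MINDED_DOWNSAMPLE:
            continue  # suppressed by the intervention
        feed.append(post)
    return feed
```

Passing a deterministic `rng` (e.g. `lambda: 1.0` to disable suppression) makes the filter testable; in the real experiment the comparison was between treated feeds like this and users' ordinary feeds.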

... (truncated, 73 KB total)
Resource ID: 0bf075dd08612043 | Stable ID: MzBiZWZjNj