Gorwa et al.: "Algorithmic Content Moderation"
Web Credibility Rating
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: SAGE Journals
Relevant to AI safety discussions around value alignment in deployed systems and accountability in automated decision-making; useful background for understanding content moderation as a real-world alignment challenge at scale.
Summary
Gorwa, Binns, and Katzenbach examine how social media platforms increasingly deploy algorithmic systems for content moderation, analyzing the technical limitations and political tensions inherent in automating decisions about permissible speech. The paper argues that algorithmic moderation is neither neutral nor purely technical, but embeds normative choices that have significant governance implications. It calls for greater transparency and accountability in how these systems are designed and deployed.
Key Points
- Algorithmic content moderation embeds normative and political judgments in technical systems, making it inherently a governance challenge, not just an engineering problem.
- Platforms face structural pressures to automate moderation at scale, but automation introduces errors, bias, and opacity that undermine fair enforcement.
- The paper identifies a "values gap" between the complexity of human communication and the bluntness of automated classification systems.
- Transparency and contestability are identified as key accountability mechanisms lacking in current algorithmic moderation practices.
- The authors call for interdisciplinary approaches combining computer science, law, and social science to address platform governance challenges.