
Rabbit Hole: What Is the Internet Doing to Us? (NYT Kevin Roose)

Source type: web

Credibility Rating

4/5 (High)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: The New York Times

Journalistic series relevant to AI safety concerns about recommendation algorithms, persuasion, and epistemic autonomy; useful background for understanding real-world harms from deployed AI systems shaping information environments.

Metadata

Importance: 42/100 · Tags: opinion piece, news

Summary

A New York Times column and podcast series by tech journalist Kevin Roose exploring how algorithmic recommendation systems, online radicalization, and digital life shape human behavior and belief. The series investigates how platforms exploit attention and push users toward extreme content, with implications for autonomy and persuasion.

Key Points

  • Investigates how internet algorithms and recommendation systems influence beliefs, behaviors, and radicalization pathways
  • Documents real case studies of individuals drawn into online rabbit holes, illustrating persuasion and autonomy risks
  • Explores the societal consequences of moving life increasingly online, including mental health and political polarization effects
  • Relevant to AI safety discussions around persuasive AI, recommendation systems, and erosion of epistemic autonomy

Cited by 1 page

Page                         Type   Quality
AI Preference Manipulation   Risk   55.0

Cached Content Preview

HTTP 200 · Fetched Mar 31, 2026 · 5 KB
Rabbit Hole - The New York Times

About this capture
COLLECTED BY

Organization: Archive Team

Formed in 2009, the Archive Team (not to be confused with the archive.org Archive-It Team) is a rogue archivist collective dedicated to saving copies of rapidly dying or deleted websites for the sake of history and digital heritage. The group is composed entirely of volunteers and interested parties, and has expanded into a large number of related projects for saving online and digital history.


History is littered with hundreds of conflicts over the future of a community, group, location, or business that were "resolved" when one of the parties stepped ahead and destroyed what was there. With the original point of contention destroyed, the debates would fall by the wayside. Archive Team believes that by duplicating condemned data, the conversation and debate can continue, as can the richness and insight gained by keeping the materials. Our projects have ranged in size from a single volunteer downloading the data from a small-but-critical site, to over 100 volunteers stepping forward to acquire terabytes of user-created data to save for future generations.


The main site for Archive Team is at archiveteam.org and contains up-to-date information on various projects, manifestos, plans, and walkthroughs.


This collection contains the output of many Archive Team projects, both ongoing and completed. Thanks to the generous provision of disk space by the Internet Archive, multi-terabyte datasets can be made available and put to use by the Wayback Machine, providing a path back to lost websites and work.


Our collection has grown to the point of having sub-collections for the type of data we acquire. If you are seeking to browse the contents of these collections, the Wayback Machine is the best first stop. Otherwise, you are free to dig into the stacks to see what you may find.


The Archive Team Panic Downloads are full pulldowns of currently extant websites, meant to serve as emergency backups for needed sites that are in danger of closing, or which will be missed dearly if suddenly lost due to hard drive crashes or server failures.

Collection: ArchiveBot: The Archive Team Crowdsourced Crawler

ArchiveBot is an IRC bot designed to automate the archival of smaller websites (e.g. up to a few hundred thousand URLs). You give it a URL to start at, and it grabs all content under that URL, records it in a WARC, and then uploads that WARC to ArchiveTeam servers for eventual injection into the Internet Archive (or other archive sites).

To use ArchiveBot, drop by #archivebot on EFNet. To interact with ArchiveBot, you issue commands by typing them into the channel. Note yo

... (truncated, 5 KB total)
Resource ID: 3767db8f76073b0b | Stable ID: ZDBiN2YyND