Longterm Wiki

Credibility Rating

High (4/5)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: The New York Times

Relevant to AI safety discussions around AI-enabled disinformation, synthetic media misuse, and governance challenges in regulating automated or semi-automated influence operations that could be accelerated by generative AI tools.

Metadata

Importance: 52/100
Type: news article
Tags: news

Summary

This NYT investigation reveals how private companies have commoditized disinformation, offering influence operations as a service to political clients, corporations, and governments. It documents the shift from state-sponsored to commercially driven disinformation, with firms supplying fake accounts, coordinated inauthentic behavior, and narrative manipulation. The piece highlights how enforcement gaps and platform limitations allow these operations to persist.

Key Points

  • Private firms now offer disinformation-as-a-service, lowering barriers for any paying client to run influence operations previously limited to nation-states.
  • Operations include creating fake social media accounts, running astroturfing campaigns, and coordinating narrative manipulation across platforms.
  • The commercialization of disinformation complicates attribution and enforcement, as it obscures who is ultimately responsible.
  • Social media platforms struggle to detect and remove these operations at scale, often acting only after outside researchers expose them.
  • This trend has significant implications for democratic integrity, public trust in information, and the risks posed by AI-generated synthetic media.
Resource ID: 28b6a4aef8f1d1da | Stable ID: MWY2NDRmOD