Longterm Wiki

Facebook Third-Party Fact-Checking Program


Relevant to AI safety discussions around platform governance, content moderation at scale, and the role of AI systems in amplifying or mitigating misinformation — a case study in technology governance and trust.

Metadata

Importance: 38/100

Summary

Facebook's Third-Party Fact-Checking Program is Meta's initiative to partner with independent, certified fact-checkers to identify and reduce the spread of misinformation on the platform. Under the program, flagged content is reviewed and rated by these partners, labeled with warnings where appropriate, and demoted in the ranking algorithm to limit the viral spread of false information.

Key Points

  • Partners with certified independent fact-checkers to review and rate potentially false content on Facebook and Instagram.
  • Content rated as false or misleading receives warning labels and is algorithmically demoted to reduce distribution.
  • Publishers who repeatedly share misinformation face reduced reach for all their content.
  • Program is presented as a middle ground between removing content outright and allowing unchecked spread of falsehoods.
  • Raises questions about platform governance, editorial responsibility, and the scalability of human fact-checking.

Cited by 1 page

Resource ID: d27f85e1f545b731 | Stable ID: MDQ5NTVmNT