All Source Checks
Automated source checking of wiki data against original sources. Each record is checked against one or more external sources to confirm accuracy.
Verified Correct: 8,620 (70% of checked)
Has Issues: 2,447 (20% of checked)
Can't Verify: 1,233 (10% of checked, incl. 38 dead links)
Not Yet Checked: 0 (of 12,300 total)
Contradicted: 384 (fix now: data may be wrong)
Outdated: 23 (source has newer info)
Accuracy Rate: 95% (confirmed / (confirmed + wrong + outdated))
Needs Recheck: 0 (all up to date)
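The accuracy rate can be reproduced from the tile counts above; a minimal sketch in Python (variable names are illustrative, not the dashboard's actual field names):

```python
# Verdict counts as shown on the dashboard tiles.
confirmed = 8_620    # Verified Correct
has_issues = 2_447
cant_verify = 1_233  # incl. 38 dead links
contradicted = 384
outdated = 23
total = 12_300

# Every record has been checked at least once.
checked = confirmed + has_issues + cant_verify
assert checked == total  # Not Yet Checked is 0

# Accuracy rate: confirmed / (confirmed + wrong + outdated).
accuracy = confirmed / (confirmed + contradicted + outdated)
print(f"{accuracy:.0%}")  # → 95%
```

Note that "Can't Verify" records are excluded from the denominator: a dead link is a coverage problem, not evidence the data is wrong.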
AI Index Report 2024
Societal Adaptation to Advanced AI
Risk Thresholds for Frontier AI
Visibility into AI Agents
ABOUT ML
An Early Look at the Labor Market Impact Potential of LLMs
Risk Mitigation Strategies for the Open Foundation Model Value Chain
Beyond Open vs. Closed: Foundation AI Model Governance
Science and Global Health R&D
Brown University record
Climate
IDs for AI Systems
Generative AI, the American Worker, and the Future of Work
Brown University record
Animal Welfare Fund
Safeguarded AI Programme
Global Health and Wellbeing
Brown University record
An Agenda to Strengthen U.S. Democracy in the Age of AI
Infrastructure for AI Agents
Emboldened Offenders, Endangered Communities: Internet Shutdowns in 2024
Lead Exposure Action Fund (LEAF)
Christina Paxson @ Brown University
Responsibly Navigating the Enterprise AI Landscape
Global Catastrophic Risks
Biosecurity & Pandemic Preparedness
Forecasting
The Coming AI Backlash Will Shape Future Regulation
Third-Party Compliance Reviews for Frontier AI Safety Frameworks
Science Systems
2025 Landscape Report
HdH8_w6BFy
Forecasting LLM-Enabled Biorisk and the Efficacy of Safeguards
Navigating Transformative AI
AlgorithmWatch record
Global Aid Policy
AI System-to-Model Innovation
Center for Humane Technology record
EA Infrastructure Fund
A Research Agenda
Policy & Advocacy
Pulling Back the Curtain on China's Military-Civil Fusion
The Use of Open Models in Research
Farm Animal Welfare
Maxime Fournes @ PauseAI
AI Governance at the Frontier
Access Now record
AI Safety Index Winter 2025
Aza Raskin @ Center for Humane Technology
TA1.1 — Theory (Scaffolding)
Data from the source_check_verdicts table.
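The tile counts are a simple aggregation over that table; a minimal sketch, assuming a hypothetical schema with record and status columns (the real column names and status values may differ):

```python
import sqlite3
from collections import Counter

# Hypothetical miniature of the source_check_verdicts table;
# the real schema may differ.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE source_check_verdicts (record TEXT, status TEXT)")
conn.executemany(
    "INSERT INTO source_check_verdicts VALUES (?, ?)",
    [
        ("AI Index Report 2024", "confirmed"),
        ("Brown University record", "confirmed"),
        ("Climate", "cant_verify"),
        ("Forecasting", "contradicted"),
    ],
)

# Each tile is one bucket of a GROUP BY over the status column.
rows = conn.execute(
    "SELECT status, COUNT(*) FROM source_check_verdicts GROUP BY status"
).fetchall()
counts = Counter(dict(rows))
print(counts)  # e.g. Counter({'confirmed': 2, 'cant_verify': 1, 'contradicted': 1})
```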