All Source Checks
Automated source checking of wiki data: each record is compared against one or more original external sources to confirm its accuracy.
View internal dashboard with coverage & action queue →
Verified Correct: 8,618 (70% of checked)
Has Issues: 2,458 (20% of checked)
Can't Verify: 1,230 (10% of checked; incl. 38 dead links)
Not Yet Checked: 0 (of 12,306 total)
Contradicted: 385 (fix now: data may be wrong)
Outdated: 26 (source has newer info)
Accuracy Rate: 95% (confirmed / (confirmed + wrong + outdated))
Needs Recheck: 0 (all up to date)
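As a quick sanity check, the headline numbers above are internally consistent. A minimal sketch, with the counts copied from the cards and variable names of my own choosing:

```python
# Counts copied from the dashboard cards above.
confirmed = 8_618     # Verified Correct
has_issues = 2_458    # Has Issues
cant_verify = 1_230   # Can't Verify
contradicted = 385    # "wrong" in the accuracy formula
outdated = 26         # Outdated

# The three checked buckets sum to the stated total of 12,306 records.
checked = confirmed + has_issues + cant_verify
print(checked)  # 12306

# Accuracy Rate = confirmed / (confirmed + wrong + outdated)
accuracy = confirmed / (confirmed + contradicted + outdated)
print(f"{accuracy:.1%}")  # 95.4%, displayed on the dashboard as 95%
```

Note that records the checker can't verify are excluded from the denominator, so the accuracy rate measures agreement only among records where a verdict was reachable.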
General Support (Survival and Flourishing Fund (SFF) -> LessWrong)
Grant to SaferAI (Survival and Flourishing Fund (SFF) -> SaferAI)
General Support of Centre for the Study of Existential Risk, University of Cam…
Grant to Legal Advocates for Safe Science and Technology (LASST) (Survival and…
Grant to Center for AI Safety (Survival and Flourishing Fund (SFF) -> 60)
Fund Sentinel for Q1-2025 (Manifund -> 248)
Grant to Center for AI Safety, Inc. (Survival and Flourishing Fund (SFF) -> 60)
Compute and other expenses for LLM alignment research (Manifund -> Ethan Josea…
General support of Toronto and Michigan NLP Group for AI Safety (Survival and …
General Support of Centre for the Study of Existential Risk, University of Cam…
General support of AI Strategy Team (Survival and Flourishing Fund (SFF) -> Re…
General Support of LessWrong (Survival and Flourishing Fund (SFF) -> 205)
Grant to FAR AI (Survival and Flourishing Fund (SFF) -> FAR AI)
General support of 80,000 hours (Survival and Flourishing Fund (SFF) -> 1)
General Support (Survival and Flourishing Fund (SFF) -> Elicit (AI Research To…
Grant to Quantified Uncertainty Research Institute (Survival and Flourishing F…
General Support of BERI-CHAI collaboration (Survival and Flourishing Fund (SFF…
Center for International Security and Cooperation — Megan Palmer’s Biosecurity…
Nuclear Threat Initiative — Projects to Reduce Global Catastrophic Biological …
Open Phil AI Fellowship — 2020 Class (Coefficient Giving -> Coefficient Giving)
Nuclear Threat Initiative — Biosecurity Program Support (Coefficient Giving ->…
Center for Security and Emerging Technology — General Support (January 2021) (…
Hypermind — COVID-19 Forecasting (Coefficient Giving -> Good Judgment (Forecas…
Open Phil AI Fellowship — 2021 Class (Coefficient Giving -> Coefficient Giving)
Blue Ribbon Study Panel on Biodefense — General Support (2016) (Coefficient Gi…
Yale University School of Medicine — Vascular Therapy Research (Coefficient Gi…
Open Phil AI Fellowship — 2018 Class (Coefficient Giving -> Coefficient Giving)
Ought — General Support (2019) (Coefficient Giving -> Elicit (AI Research Tool))
Open Phil AI Fellowship — 2022 Class (Coefficient Giving -> Coefficient Giving)
Nuclear Threat Initiative — Biosecurity Program Support (May 2020) (Coefficien…
Amanda Askell at Anthropic (Character Lead)
Daniela Amodei at Anthropic (Co-founder, President)
Dario Amodei at Anthropic (CEO)
Jan Leike at Anthropic (Head of Alignment Science)
Mustafa Suleyman at Google DeepMind (Co-founder, Head of Applied AI)
Scott Alexander at Astral Codex Ten / Slate Star Codex (Author)
Lilian Weng at OpenAI (VP of Research)
Andrew Ng at Stanford University (Professor of Computer Science)
Sara Hooker at Cohere For AI (Founder & Head of Research)
Helen Toner at Open Philanthropy (Senior Analyst)
Greg Brockman at Stripe (CTO)
Wojciech Zaremba at OpenAI (Co-founder, Head of Codex)
Jason Wei at Google Brain (Research Scientist)
Margrethe Vestager at European Commission (Executive VP for Competition & Digi…
Jason Matheny at White House OSTP (Deputy Assistant to the President)
Mira Murati at OpenAI (CTO)
Zvi Mowshowitz at Independent (Blogger & AI Commentator)
Scott Aaronson at OpenAI (Guest Researcher)
Andrew Ng at Google Brain (Co-founder)
Arvind Narayanan at Princeton University (Professor of Computer Science)
Data from source_check_verdicts table. Click a row to view detailed evidence.
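The summary counts can in principle be rebuilt by grouping the verdict rows. A sketch under the assumption that each row in source_check_verdicts carries a single verdict label (the field values here are illustrative, not the table's actual schema):

```python
from collections import Counter

# Hypothetical sample of verdict labels; on the real dashboard these
# would be read from the source_check_verdicts table (schema assumed).
verdicts = ["confirmed", "confirmed", "confirmed",
            "contradicted", "cant_verify", "outdated"]

counts = Counter(verdicts)          # one bucket per verdict label
checked = sum(counts.values())      # total checked records

confirmed = counts["confirmed"]
wrong = counts["contradicted"]
outdated = counts["outdated"]
accuracy = confirmed / (confirmed + wrong + outdated)
print(counts, f"{accuracy:.0%}")
```

The same Counter also yields the per-card percentages (e.g. counts["confirmed"] / checked), which is presumably how the "% of checked" figures above are derived.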