Sentinel is a foresight organization founded in 2024 and led by Nuño Sempere. It processes millions of news items weekly through AI filtering and assessment by elite forecasters to identify emerging global catastrophic risks, publishing its findings via a newsletter and podcast. The page describes Sentinel's multi-stage detection pipeline and forecaster network but lacks independent validation of its accuracy claims.
Concepts

- Large Language Models (Concept): Comprehensive assessment of LLM capabilities showing training costs growing 2.4x/year ($78-191M for frontier models, though DeepSeek achieved near-parity at $6M), o3 reaching 91.6% on AIME and 87.5... (Quality: 62/100)
Organizations
- ControlAI (Organization): ControlAI is a UK-based advocacy organization that has achieved notable policy engagement success (briefing 150+ lawmakers, securing support from 100+ UK parliamentarians) while promoting direct in... (Quality: 63/100)
- Swift Centre (Organization): Swift Centre is a UK forecasting organization that provides conditional forecasting services to various clients including some AI companies, but is not primarily focused on AI safety. While they de... (Quality: 50/100)
- Coefficient Giving (Organization): Coefficient Giving (formerly Open Philanthropy) has directed $4B+ in grants since 2014, including $336M to AI safety (~60% of external funding). The organization spent ~$50M on AI safety in 2024, w... (Quality: 55/100)
- Good Judgment (Forecasting) (Organization): Good Judgment Inc. is a commercial forecasting organization that emerged from successful IARPA research, demonstrating that trained 'superforecasters' can outperform intelligence analysts and predi... (Quality: 50/100)
- Future of Humanity Institute (Organization): The Future of Humanity Institute (2005-2024) was a pioneering Oxford research center that founded existential risk studies and AI alignment research, growing from 3 to ~50 researchers and receiving... (Quality: 51/100)
- Machine Intelligence Research Institute (Organization): Comprehensive organizational history documenting MIRI's trajectory from pioneering AI safety research (2000-2020) to policy advocacy after acknowledging research failure, with detailed financial da... (Quality: 50/100)
Other
- Scott Alexander (Person): Scott Alexander is a psychiatrist and writer behind Astral Codex Ten (ACX), one of the most widely read blogs in the rationalist and effective altruism communities. His writing on AI risk, predicti...
- Misha Yagudin (Person): Founding partner at Arb, a research consultancy that has worked for Open Philanthropy and others. Collaborator with QURI and Nuño Sempere. Met Nuño at an FHI Summer Fellowship.