An Oxford University research center focused on existential risks, founded by Nick Bostrom. It closed in April 2024, and its website (fhi.ox.ac.uk) is no longer available.
Approaches
- AI Alignment: Comprehensive review of AI alignment approaches finding that current methods (RLHF, Constitutional AI) show 75%+ effectiveness on measurable safety metrics for existing systems but face critical scalabi... (Quality: 91/100)
- Intervention Evaluation for Political Stability: A reasonably comprehensive survey of political stability evaluation frameworks, methods, and critiques that touches on AI safety intersections (surveillance, power concentration, disinformation) bu...
Analysis
- AI Risk Activation Timeline Model: Comprehensive framework mapping AI risk activation windows with specific probability assessments: current risks already active (disinformation 95%+, spear phishing active), near-term critical windo... (Quality: 66/100)
- AI Safety Researcher Gap Model: Quantifies the AI safety talent shortage: 300-800 currently unfilled positions (a 30-50% gap), with training pipelines producing only 220-450 researchers annually when 500-1,500 are needed. Projects gaps co... (Quality: 67/100)
- Capability-Alignment Race Model: Quantifies the capability-alignment race, showing capabilities currently ~3 years ahead of alignment readiness, with the gap widening at 0.5 years/year, driven by 10²⁶ FLOP scaling vs. 15% interpretabili... (see the sketch after this list) (Quality: 62/100)
- Anthropic IPO: Anthropic is actively preparing for a potential 2026 IPO, with concrete steps like hiring Wilson Sonsini and conducting bank discussions, though timeline uncertainty remains, with prediction markets ... (Quality: 65/100)
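To make the Capability-Alignment Race Model's headline numbers concrete, here is a minimal sketch of the projection they suggest, assuming the quoted figures (~3-year current lead, widening at 0.5 years/year) describe a straight-line trend; the function name and parameters are illustrative and not taken from the model page itself.

```python
# Minimal sketch of the linear gap projection suggested by the
# Capability-Alignment Race Model summary above. Assumption (ours,
# not the model page's): the quoted figures (~3-year lead today,
# widening ~0.5 years/year) describe a straight-line trend.

def alignment_gap(years_from_now: float,
                  initial_gap: float = 3.0,
                  widening_rate: float = 0.5) -> float:
    """Projected capability lead over alignment readiness, in years."""
    return initial_gap + widening_rate * years_from_now

if __name__ == "__main__":
    for t in (0, 5, 10):
        print(f"+{t} years: gap ≈ {alignment_gap(t):.1f} years")
```

Under this assumed linear form, the gap would reach roughly 5.5 years in five years and 8 years in ten; the model page itself may use a different functional form.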
Organizations
- GovAI: GovAI is an AI policy research organization with ~40-45 staff, funded primarily by Coefficient Giving ($1.8M+ in 2023-2024), that has trained 100+ governance researchers through fellowships and cur... (Quality: 43/100)
- FTX: FTX was a major crypto exchange that collapsed in November 2022 due to fraud; its AI safety relevance stems from FTX Future Fund grants to organizations like Anthropic and its ties to the e... (Quality: 74/100)
- Anthropic: Comprehensive reference page on Anthropic covering financials ($380B valuation, $14B ARR at Series G, growing to $19B by March 2026) and safety research (Constitutional AI, mechanistic interpretability... (Quality: 74/100)
- OpenAI: Comprehensive organizational profile of OpenAI documenting its evolution from 2015 non-profit to Public Benefit Corporation, with detailed analysis of the governance crisis and 2024-2025 ownership restructuri... (Quality: 62/100)
Concepts
- Superintelligence: AI systems with cognitive abilities vastly exceeding human intelligence. (Quality: 92/100)
Key Debates
- AI Accident Risk Cruxes (Crux): Comprehensive survey of AI safety researcher disagreements on accident risks, quantifying probability ranges for mesa-optimization (15-55%), deceptive alignment (15-50%), and P(doom) (5-35% median ... (Quality: 67/100)
- Is AI Existential Risk Real? (Crux): Covers the foundational AI x-risk debate across four core cruxes: instrumental convergence, warning sign availability, corrigibility achievability, and timeline urgency. Incorporates quantitative e... (Quality: 12/100)
- The Case For AI Existential Risk (Argument): Comprehensive formal argument that AI poses a 5-14% median extinction risk by 2100 (per a 2,788-researcher survey), structured around four premises: capabilities will advance, alignment is hard (with d... (Quality: 66/100)
- The Case Against AI Existential Risk (Argument): Comprehensive synthesis of skeptical arguments against AI x-risk from prominent researchers (LeCun, Marcus, Ng, Brooks), concluding that x-risk probability is <5% (likely ~2%) based on challenges to sca... (Quality: 58/100)
Other
- Nick Beckstead (Person): Nick Beckstead is a philosopher and EA/longtermism figure whose 2013 dissertation formalized longtermist ethics; the article covers his career arc from academic philosopher to Coefficient Giving (... (Quality: 60/100)
- Corrigibility (Research Area): Comprehensive review of corrigibility research showing that fundamental tensions between goal-directed behavior and shutdown compliance remain unsolved after 10+ years, with 2024-25 empirical evidence r... (Quality: 59/100)
Risks
- AI Disinformation: Post-2024 analysis shows AI disinformation had limited immediate electoral impact (cheap fakes were used 7x more than AI content) but creates concerning long-term epistemic erosion, with 82% higher beli... (Quality: 54/100)
- AI Knowledge Monopoly: Analyzes the risk that 2-3 AI systems could dominate humanity's knowledge access by 2040, projecting 80%+ market concentration with correlated errors and epistemic lock-in. Provides comprehensive m... (Quality: 50/100)
Historical
- Deep Learning Revolution Era: Comprehensive timeline documenting 2012-2020 AI capability breakthroughs (AlexNet, AlphaGo, GPT-3) and the parallel development of the safety field, with quantified metrics showing capabilities funding outpac... (Quality: 44/100)