AI Risk Portfolio Analysis
A quantitative portfolio framework recommending AI safety resource allocation: 40-70% to misalignment, 15-35% to misuse, and 10-25% to structural risks, varying with timeline beliefs. Based on analysis of 2024 funding ($110-130M total), it identifies specific gaps including governance (underfunded by $15-20M), agent safety ($7-12M gap), and international capacity ($11-16M gap).
Overview
This framework provides quantitative estimates for allocating limited resources across AI risk categories. Based on expert surveys and risk assessment methodologies from organizations like RAND and the Center for Security and Emerging Technology (CSET), the analysis estimates that misalignment accounts for 40-70% of existential risk, misuse 15-35%, and structural risks 10-25%.
The model draws from portfolio optimization theory and Coefficient Giving's cause prioritization framework, addressing the critical question: how should the AI safety community allocate its $100M+ in annual resources across different risk categories? All estimates carry substantial uncertainty (±50% or more), so the framework's value lies in relative comparisons rather than precise numbers.
Risk Assessment Matrix
| Risk Category | X-Risk Share | P(Catastrophe) | Tractability | Neglectedness | Current Allocation |
|---|---|---|---|---|---|
| Misalignment | 40-70% | 15-45% | 2.5/5 | 3/5 | ≈50% |
| Misuse | 15-35% | 8-25% | 3.5/5 | 4/5 | ≈25% |
| Structural | 10-25% | 5-15% | 4/5 | 4.5/5 | ≈15% |
| Accidents (non-X) | 5-15% | 20-40% | 4.5/5 | 2.5/5 | ≈10% |
These estimates represent informed speculation based on limited data. Superforecasters and AI experts show significant disagreement on these parameters, with confidence intervals often spanning 2-3x.
Strategic Prioritization Framework
Expected Value Calculation
The framework applies standard expected value methodology, scoring each category as the product of risk magnitude, tractability (probability of success), and neglectedness:
| Category | Risk Magnitude | P(Success) | Neglectedness | Priority Score |
|---|---|---|---|---|
| Misalignment | 8.5/10 | 0.25 | 0.6 | 1.28 |
| Misuse | 6.0/10 | 0.35 | 0.8 | 1.68 |
| Structural | 4.5/10 | 0.40 | 0.9 | 1.62 |
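The priority scores above follow from a simple multiplicative reading of the table. A minimal sketch in Python, assuming the score is the plain product of magnitude, tractability, and neglectedness (the product form is inferred from the figures, not stated explicitly by the source):

```python
# Minimal sketch: reproduce the table's priority scores, assuming the score is
# the product of risk magnitude (0-10), P(success), and neglectedness (0-1).
risks = {
    # name: (magnitude, p_success, neglectedness)
    "Misalignment": (8.5, 0.25, 0.6),
    "Misuse":       (6.0, 0.35, 0.8),
    "Structural":   (4.5, 0.40, 0.9),
}

for name, (magnitude, p_success, neglectedness) in risks.items():
    priority = magnitude * p_success * neglectedness
    print(f"{name}: {priority:.2f}")
# -> Misalignment ~1.28, Misuse 1.68, Structural 1.62, matching the table
```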
Timeline-Dependent Allocation
Resource allocation should vary significantly based on AGI timeline beliefs:
| Timeline Scenario | Misalignment | Misuse | Structural | Rationale |
|---|---|---|---|---|
| Short (2-5 years) | 70-80% | 15-20% | 5-10% | Only time for direct alignment work |
| Medium (5-15 years) | 50-60% | 25-30% | 15-20% | Balanced portfolio approach |
| Long (15+ years) | 40-50% | 20-25% | 25-30% | Time for institutional solutions |
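A funder with probabilistic timeline beliefs can blend the three rows rather than commit to one scenario. The sketch below uses the midpoints of the ranges above; the probability weights are purely illustrative, not from the source:

```python
# Minimal sketch: blend the scenario allocations by a funder's timeline beliefs.
# Scenario rows use the midpoints of the ranges in the table above; the
# probability weights are purely illustrative assumptions.
allocations = {                       # (misalignment, misuse, structural)
    "short (2-5y)":   (0.75, 0.175, 0.075),
    "medium (5-15y)": (0.55, 0.275, 0.175),
    "long (15+y)":    (0.45, 0.225, 0.275),
}
beliefs = {"short (2-5y)": 0.25, "medium (5-15y)": 0.50, "long (15+y)": 0.25}

blended = [sum(beliefs[s] * allocations[s][i] for s in allocations) for i in range(3)]
total = sum(blended)                  # range midpoints need not sum exactly to 1
blended = [round(x / total, 3) for x in blended]
print(dict(zip(["misalignment", "misuse", "structural"], blended)))
# -> roughly 58% / 24% / 18% with these illustrative weights
```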
pie title Current Resource Allocation vs Optimal (Medium Timeline)
    "Misalignment (Current 50%)" : 50
    "Misalignment (Optimal 55%)" : 55
    "Misuse (Current 25%)" : 25
    "Misuse (Optimal 27%)" : 27
    "Structural (Current 15%)" : 15
    "Structural (Optimal 18%)" : 18
Marginal Value Analysis
Current Bottlenecks by Risk Category
| Category | Primary Bottleneck | Marginal $ Value | Saturation Risk | Key Organizations |
|---|---|---|---|---|
| Misalignment | Conceptual clarity | High (if skilled) | Medium | MIRI, Anthropic |
| Misuse | Government engagement | Very High | Low | CNAS, CSET |
| Structural | Framework development | High | Very Low | GovAI, CAIS |
| Accidents | Implementation gaps | Medium | High | Partnership on AI |
Funding Landscape Analysis
Based on comprehensive analysis from Coefficient Giving, Longview Philanthropy estimates, and LTFF reporting, external AI safety funding reached approximately $110-130M in 2024:
| Funding Source | 2024 Amount | Share | Key Focus Areas |
|---|---|---|---|
| Coefficient Giving | $63.6M | ≈49% | Technical alignment, evaluations, governance |
| Survival & Flourishing Fund | $19M+ | ≈15% | Diverse safety research |
| Long-Term Future Fund | $5.4M | ≈4% | Early-career, small orgs |
| Jaan Tallinn & individual donors | $20M | ≈15% | Direct grants to researchers |
| Government (US/UK/EU) | $32.4M | ≈25% | Policy-aligned research |
| Other (foundations, corporate) | $10-20M | ≈10% | Various |
The breakdown by research area reveals significant concentration in interpretability and evaluations:
| Research Area | 2024 Funding | Share | Trend | Optimal (Medium Timeline) |
|---|---|---|---|---|
| Interpretability | $52M | 40% | Growing | 30-35% |
| Evaluations/benchmarking | $23M | 18% | Rapid growth | 15-20% |
| Constitutional AI/RLHF | $38M | 29% | Stable | 25-30% |
| Governance/policy | $18M | 14% | Underfunded | 20-25% |
| Red-teaming | $15M | 12% | Growing | 10-15% |
| Agent safety | $8.2M | 6% | Emerging | 10-15% |
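The implied dollar reallocation can be read off by comparing current funding with the midpoint of the optimal column. A rough sketch, treating the ~$130M upper-bound total as fixed and noting that the source categories overlap (shares sum to more than 100%), so the results are directional only:

```python
# Minimal sketch: dollar shifts implied by moving each research area from its
# current funding to the midpoint of the "Optimal (Medium Timeline)" column,
# against a ~$130M external total (upper end of the 2024 estimate).
TOTAL = 130  # $M
areas = {
    # name: (current $M, optimal share midpoint)
    "Interpretability":       (52.0, 0.325),
    "Evaluations":            (23.0, 0.175),
    "Constitutional AI/RLHF": (38.0, 0.275),
    "Governance/policy":      (18.0, 0.225),
    "Red-teaming":            (15.0, 0.125),
    "Agent safety":           (8.2,  0.125),
}
for name, (current, optimal_share) in areas.items():
    shift = optimal_share * TOTAL - current
    print(f"{name}: {shift:+.1f} $M")
# Governance and agent safety show the largest positive shifts, consistent with
# the funding-gap table later in this analysis.
```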
Coefficient Giving accounts for roughly half to 60% of all external AI safety investment (depending on which total estimate is used), with $63.6M deployed in 2024. Since 2017, Coefficient Giving has donated approximately $336M to AI safety (~12% of its total $2.8B in giving). The median Coefficient Giving AI safety grant is $257k; the average is $1.67M.
Risk Interdependency Network
Rather than independent categories, risks exhibit complex interactions affecting prioritization:
flowchart TD
    CAP[AI Capabilities] -->|amplifies| MIS[Misalignment Risk]
    CAP -->|enables| USE[Misuse Risk]
    USE -->|degrades| GOV[Governance Quality]
    GOV -->|mitigates| USE
    GOV -->|weakly mitigates| MIS
    STR[Structural Risks] -->|erodes| GOV
    MIS -->|if realized| STR
    style CAP fill:#ff9999
    style MIS fill:#ffcccc
    style USE fill:#ffffcc
    style GOV fill:#ccffcc
    style STR fill:#ccccff
Correlation Matrix
| Risk Pair | Correlation | Implication for Portfolio |
|---|---|---|
| Misalignment ↔ Capabilities | +0.8 | High correlation; capabilities research affects risk |
| Misuse ↔ Governance Quality | -0.6 | Good governance significantly reduces misuse |
| Structural ↔ All Others | +0.4 | Structural risks amplify other categories |
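These correlations matter for portfolio construction because positively correlated risks reduce the benefit of diversification. The sketch below applies a Markowitz-style variance calculation; the weights are the medium-timeline allocation, while the "impact volatilities" and the off-diagonal correlations other than the +0.4 structural coupling are illustrative assumptions rather than source figures:

```python
import numpy as np

# Illustrative sketch: positive correlations between risk areas erode the
# diversification benefit of spreading funding across them.
weights = np.array([0.55, 0.27, 0.18])        # misalignment, misuse, structural
vols    = np.array([0.40, 0.30, 0.25])        # hypothetical impact uncertainty
corr = np.array([
    [1.0, 0.3, 0.4],                          # 0.3 is an assumption; 0.4 from the table
    [0.3, 1.0, 0.4],
    [0.4, 0.4, 1.0],
])
cov = np.outer(vols, vols) * corr             # covariance matrix
std_correlated   = np.sqrt(weights @ cov @ weights)
std_uncorrelated = np.sqrt(weights @ np.diag(vols**2) @ weights)
print(f"portfolio std dev: {std_correlated:.3f} vs {std_uncorrelated:.3f} if independent")
```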
Comparative Assessment Methods
Expert Survey Results
Multiple surveys reveal substantial disagreement on AI risk magnitude. The AI Impacts 2022 expert survey of 738 AI researchers and the Conjecture internal survey provide contrasting perspectives:
| Risk Category | AI Impacts Median | Conjecture Median | Expert Disagreement (IQR) | Notes |
|---|---|---|---|---|
| Total AI X-risk | 5-10% | 80% | 2-90% | Massive disagreement |
| Misalignment-specific | 25% | 60%+ | 10-50% | Safety org workers higher |
| Misuse (Bio/weapons) | 15% | 30-40% | 5-35% | Growing concern |
| Economic Disruption | 35% | 50%+ | 20-60% | Most consensus |
| Authoritarian Control | 20% | 40% | 8-45% | Underexplored |
The Conjecture survey (N=22 AI safety researchers) found no respondent reported less than 10% extinction risk, with a median of 80%. However, this sample has severe selection bias—AI safety researchers self-select for high risk estimates. The AI Impacts survey sampled mainstream ML researchers with lower risk estimates but suffered from non-response bias. True uncertainty likely spans 2-50% for catastrophic outcomes.
Case Study Comparisons
Historical technology risk portfolios provide calibration:
| Technology | Primary Risk Focus | Secondary Risks | Outcome Assessment |
|---|---|---|---|
| Nuclear weapons | Accident prevention (60%) | Proliferation (40%) | Reasonable allocation |
| Climate change | Mitigation (70%) | Adaptation (30%) | Under-weighted adaptation |
| Internet security | Technical fixes (80%) | Governance (20%) | Under-weighted governance |
Pattern: Technical communities systematically under-weight governance and structural interventions.
Uncertainty Analysis
Key Cruxes Affecting Allocation
Key Questions
- What's the probability of transformative AI by 2030? (affects all allocations)
- How tractable is technical alignment with current approaches?
- Does AI lower bioweapons barriers by 10x or 1000x?
- Are structural risks primarily instrumental or terminal concerns?
- What's the correlation between AI capability and alignment difficulty?
Sensitivity Analysis
| Parameter Change | Effect on Misalignment Priority | Effect on Misuse Priority |
|---|---|---|
| Timeline -50% (shorter) | +15-20 percentage points | -5-10 percentage points |
| Alignment tractability +50% | +10-15 percentage points | -5-8 percentage points |
| Bioweapons risk +100% | -5-8 percentage points | +10-15 percentage points |
| Governance effectiveness +50% | -3-5 percentage points | +8-12 percentage points |
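A minimal one-at-a-time sensitivity check on the multiplicative priority score clarifies what the table does and does not capture; the ±50% perturbations are illustrative:

```python
# Minimal sketch: one-at-a-time sensitivity of the misalignment priority score
# (magnitude x tractability x neglectedness). Baseline values come from the
# Expected Value Calculation table; the +/-50% perturbations are illustrative.
baseline = {"magnitude": 8.5, "tractability": 0.25, "neglectedness": 0.6}

def priority(params):
    return params["magnitude"] * params["tractability"] * params["neglectedness"]

for name in baseline:
    for factor, label in [(0.5, "-50%"), (1.5, "+50%")]:
        perturbed = dict(baseline, **{name: baseline[name] * factor})
        print(f"{label} {name}: {priority(perturbed):.2f} "
              f"(baseline {priority(baseline):.2f})")
# Because the score is a pure product, each parameter has the same proportional
# effect in isolation; the asymmetries in the table above come from reallocating
# a fixed budget across categories, which this sketch does not model.
```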
Geographic Distribution of Funding
The AI safety funding landscape shows significant geographic concentration, with implications for portfolio diversification:
| Region | 2024 Funding | Share | Key Organizations | Gap Assessment |
|---|---|---|---|---|
| SF Bay Area | $48M | 37% | CHAI, MIRI, Anthropic | Well-funded |
| London/Oxford | $32M | 25% | FHI, DeepMind, GovAI | Well-funded |
| Boston/Cambridge | $12M | 9% | MIT, Harvard | Growing |
| Washington DC | $8M | 6% | CSET, CNAS, Brookings | Policy focus |
| Rest of US | $10M | 8% | Academic dispersed | Moderate |
| Europe (non-UK) | $8M | 6% | Berlin, Zurich hubs | Underfunded |
| Asia-Pacific | $4M | 3% | Singapore, Australia | Severely underfunded |
| Rest of World | $8M | 6% | Various | Very limited |
Government initiatives are expanding geographic coverage: Canada's $12M AI Safety Research Initiative, Australia's $8.4M Responsible AI Program, and Singapore's $5.6M AI Ethics Research Fund launched in 2024-2025. These represent opportunities for funding diversification beyond the US/UK axis.
flowchart LR
    subgraph US["United States (60%)"]
        SF[SF Bay Area<br/>37%]
        BOS[Boston<br/>9%]
        DC[DC<br/>6%]
        RUS[Rest of US<br/>8%]
    end
    subgraph UK["United Kingdom (25%)"]
        LON[London/Oxford<br/>25%]
    end
    subgraph Other["Other (15%)"]
        EU[Europe<br/>6%]
        APAC[Asia-Pacific<br/>3%]
        ROW[Rest of World<br/>6%]
    end
    style SF fill:#90EE90
    style LON fill:#90EE90
    style EU fill:#FFB6C1
    style APAC fill:#FF6B6B
    style ROW fill:#FFB6C1

Implementation Recommendations
For Major Funders
Based on 2024 funding analysis, specific portfolio rebalancing recommendations:
| Funder Type | Current Allocation | Recommended Shift | Specific Opportunities | Priority |
|---|---|---|---|---|
| Coefficient Giving | 68% evals, 12% interp | +15% governance, +10% agent safety | GovAI expansion, international capacity | High |
| SFF/individual donors | Technical focus | +$5-10M to neglected areas | Value learning, formal verification | High |
| LTFF | Early career, small orgs | Maintain current portfolio | Continue diversified approach | Medium |
| Government agencies | Policy-aligned research | +$20-30M to independent oversight | AISI expansion, red-teaming | Very High |
| Tech philanthropists | Varies widely | Coordinate via giving circles | Reduce duplication | Medium |
Specific Funding Gaps (2025):
| Gap Area | Current Funding | Optimal | Gap | Recommended Recipients |
|---|---|---|---|---|
| Agent safety | $8.2M | $15-20M | $7-12M | METR, Apollo, academic groups |
| Value alignment theory | $6.5M | $12-15M | $5-9M | MIRI, academic philosophy |
| International capacity | $4M | $15-20M | $11-16M | Non-US/UK hubs |
| Governance research | $18M | $25-35M | $7-17M | GovAI, CSET, Brookings |
| Red-teaming | $15M | $20-25M | $5-10M | Independent evaluators |
For Research Organizations
Capability-Building Priorities:
| Organization Size | Primary Focus | Secondary Focus | Rationale |
|---|---|---|---|
| Large (>50 people) | Maintain current specialization | Add governance capacity | Comparative advantage |
| Medium (10-50 people) | 70% core competency | 30% neglected areas | Diversification benefits |
| Small (<10 people) | Focus on highest neglectedness | None | Resource constraints |
For Individual Researchers
Career decision framework based on 80,000 Hours methodology:
| Career Stage | If Technical Background | If Policy Background | If Economics/Social Science |
|---|---|---|---|
| Early (0-5 years) | Alignment research | Misuse prevention | Structural risk analysis |
| Mid (5-15 years) | Stay in alignment vs. pivot | Government engagement | Institution design |
| Senior (15+ years) | Research leadership | Policy implementation | Field coordination |
Current State and Trajectory
2024 Funding Landscape
Based on detailed analysis and Coefficient Giving grant data, external AI safety funding has evolved significantly:
| Year | External Funding | Internal Lab Safety | Total (Est.) | Key Developments |
|---|---|---|---|---|
| 2020 | $40-60M | $50-100M | $100-160M | Coefficient Giving ramping up |
| 2021 | $60-80M | $100-200M | $160-280M | Anthropic founded |
| 2022 | $80-100M | $200-400M | $280-500M | ChatGPT launch |
| 2023 | $90-120M | $400-600M | $490-720M | Major lab investment |
| 2024 | $110-130M | $500-700M | $610-830M | Government entry |
Major AI labs—Anthropic, OpenAI, and DeepMind—invest an estimated $500M+ combined in internal safety research annually, dwarfing external philanthropic funding. However, internal research may face conflicts of interest with commercial objectives, making external independent funding particularly valuable for governance and red-teaming work.
Coefficient Giving Technical AI Safety Grants (2024)
Detailed analysis of Coefficient Giving's $28M in Technical AI Safety grants reveals:
| Focus Area | Share of CG TAIS | Key Recipients | Assessment |
|---|---|---|---|
| Evaluations/benchmarking | 68% | METR, Apollo, UK AISI | Heavily funded |
| Interpretability | 12% | Anthropic, Redwood | Well-funded |
| Robustness | 8% | Academic groups | Moderate |
| Value alignment | 5% | MIRI, academic | Underfunded |
| Field building | 5% | MATS, training programs | Adequate |
| Other approaches | 2% | Various | Exploratory |
Projected 2025-2027 Needs
| Scenario | Annual Need | Technical | Governance | Field Building | Rationale |
|---|---|---|---|---|---|
| Short timelines (2-5y) | $300-500M | 70% | 20% | 10% | Maximize alignment progress |
| Medium timelines (5-15y) | $200-350M | 55% | 30% | 15% | Build institutions + research |
| Long timelines (15+y) | $150-250M | 45% | 35% | 20% | Institutional capacity |
Coefficient Giving's 2025 RFP commits at least $40M to technical AI safety, with potential for "substantially more depending on application quality." Priority areas marked include agent safety, interpretability, and evaluation methods.
Key Model Limitations
What This Framework Doesn't Capture
| Limitation | Impact on Recommendations | Mitigation Strategy |
|---|---|---|
| Interaction effects | Under-estimates governance value | Weight structural risks higher |
| Option value | May over-focus on current priorities | Reserve 10-15% for exploration |
| Comparative advantage | Ignores organizational fit | Apply at implementation level |
| Black swan risks | May miss novel risk categories | Regular framework updates |
Confidence Intervals
| Estimate | 90% Confidence Interval | Source of Uncertainty |
|---|---|---|
| Misalignment share | 25-80% | Timeline disagreement |
| Current allocation optimality | ±20 percentage points | Tractability estimates |
| Marginal value rankings | Medium confidence | Limited empirical data |
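One way to use these intervals is to propagate them through the expected value comparison by Monte Carlo. The sketch below samples the misalignment share from its stated 90% CI; the remaining ranges are illustrative assumptions rather than source figures:

```python
import random

# Minimal sketch: Monte Carlo propagation of the stated uncertainty.
random.seed(0)
trials = 10_000
misalignment_higher_ev = 0
for _ in range(trials):
    misalign_share = random.uniform(0.25, 0.80)   # 90% CI from the table above
    misalign_tract = random.uniform(0.10, 0.40)   # assumption
    misuse_share   = random.uniform(0.15, 0.35)   # Risk Assessment Matrix range
    misuse_tract   = random.uniform(0.25, 0.45)   # assumption
    if misalign_share * misalign_tract > misuse_share * misuse_tract:
        misalignment_higher_ev += 1

print(f"misalignment has the higher expected value in "
      f"{misalignment_higher_ev / trials:.0%} of draws")
```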
Sources & Resources
Funding Data Sources
| Source | Type | Coverage | Update Frequency | URL |
|---|---|---|---|---|
| Coefficient Giving Grants Database | Primary | All CG grants | Real-time | openphilanthropy.org |
| EA Funds LTFF Reports | Primary | LTFF grants | Quarterly | effectivealtruism.org |
| Longview Philanthropy Analysis | Analysis | Landscape overview | Annual | EA Forum |
| CG Technical Safety Analysis | Analysis | CG TAIS breakdown | Annual | LessWrong |
| Coefficient Giving | Annual reports | Strategy & priorities | Annual | openphilanthropy.org |
Expert Surveys
| Survey | Sample | Year | Key Finding | Methodology Notes |
|---|---|---|---|---|
| Grace et al. (AI Impacts) | 738 ML researchers | 2022 | 5-10% median x-risk | Non-response bias concern |
| Conjecture Internal Survey | 22 safety researchers | 2023 | 80% median x-risk | Selection bias (safety workers) |
| FLI AI Safety Index | Expert composite | 2025 | Top company graded C+ | Qualitative assessment |
Academic Literature
| Category | Key Papers | Organization | Relevance |
|---|---|---|---|
| Portfolio Theory | Markowitz (1952) | University of Chicago | Foundational framework |
| Risk Assessment | Kaplan & Garrick (1981) | UCLA | Risk decomposition |
| AI Risk Surveys | Grace et al. (2022) | AI Impacts | Expert elicitation |
| Risk Taxonomy | MIT AI Risk Repository (2024) | MIT | Comprehensive risk taxonomy |
Policy Organizations
| Organization | Focus Area | Key Resources | 2024 Budget (Est.) |
|---|---|---|---|
| RAND Corporation | Defense applications | National security risk assessments | $5-10M AI-related |
| CSET | Technology policy | AI governance frameworks | $8-12M |
| CNAS | Security implications | Military AI analysis | $3-5M AI-related |
| Frontier Model Forum | Industry coordination | AI Safety Fund ($10M+) | $10M+ |
Related Models
This framework connects with several other analytical models:
- Compounding Risks Analysis - How risks interact and amplify
- Critical Uncertainties Framework - Key unknowns affecting strategy
- Capability-Alignment Race Model - Timeline dynamics
- AI Safety Defense in Depth Model - Multi-layered risk mitigation
References
RAND Corporation is a nonprofit research organization providing objective analysis and policy recommendations across a wide range of topics including national security, technology, governance, and emerging risks. It produces influential studies on AI policy, cybersecurity, and global governance challenges. RAND's work is frequently cited by governments and policymakers worldwide.
Partnership on AI (PAI) is a nonprofit coalition of AI researchers, civil society organizations, academics, and companies working to develop best practices, conduct research, and shape policy around responsible AI development. It brings together diverse stakeholders to address challenges including safety, fairness, transparency, and the societal impacts of AI systems. PAI serves as a coordination hub for cross-sector dialogue on AI governance.
Epoch AI is a research organization focused on investigating and forecasting trends in artificial intelligence, particularly around compute, training data, and algorithmic progress. They produce empirical analyses and datasets to inform understanding of AI development trajectories and support better decision-making in AI governance and safety.
The 2022 ESPAI surveyed 738 machine learning researchers (NeurIPS/ICML authors) about AI progress timelines and risks, serving as a replication and update of the 2016 survey. Key findings include an aggregate forecast of 50% chance of HLMI by 2059 (37 years from 2022), with significant disagreement among experts about timelines and risks.
The Center for a New American Security's Technology and National Security Program produces policy research and recommendations focused on U.S.-China competition in AI, biotechnology, next-generation communications, and quantum technologies. It aims to help U.S. and allied policymakers maintain technological leadership while managing risks to security and democratic values. The program bridges technology and policy communities to develop actionable governance frameworks.
CNAS is a Washington D.C.-based national security think tank publishing research on defense, technology policy, economic security, and AI governance. Its Technology & National Security program produces policy-relevant work on AI, cybersecurity, and emerging technologies with implications for AI safety and governance.
AI Philanthropy's 2023 report surveys philanthropic funding landscapes related to artificial intelligence, likely covering grant allocations, priority areas, and strategic recommendations for donors interested in AI-related causes including safety. It provides insight into how philanthropic capital is being deployed across AI research, governance, and safety initiatives.
Good Judgment Open is a crowd-sourced forecasting platform where participants predict geopolitical, economic, and technological events, with top performers earning the 'Superforecaster' designation. Founded by Philip Tetlock, whose research demonstrated that structured probabilistic thinking can dramatically improve prediction accuracy. The platform serves as both a competitive forecasting community and a research tool for studying human judgment under uncertainty.
Harry Markowitz's foundational 1952 paper introduces modern portfolio theory, establishing the mathematical framework for optimal portfolio construction by balancing expected return against variance (risk). It introduces the concept of the 'efficient frontier' and demonstrates that diversification can reduce risk without sacrificing expected returns. This work formalized the trade-off between risk and return, revolutionizing financial economics and decision-making under uncertainty.
RAND Corporation's AI research hub covers policy, national security, and governance implications of artificial intelligence. It aggregates reports, analyses, and commentary on AI risks, military applications, and regulatory frameworks from one of the leading U.S. defense and policy think tanks.
This academic paper from the Review of Financial Studies presents foundational theory on portfolio optimization, likely building on Markowitz mean-variance framework. It addresses how investors should allocate resources across assets to optimize risk-return tradeoffs under uncertainty.
A self-critical assessment by Open Philanthropy leadership proposing changes to the organization's cause prioritization framework, resource allocation, and strategic direction. The piece reflects on lessons learned and argues for adjustments in how the organization balances near-term vs. long-term priorities, including AI safety funding. It represents an important public accounting of how a major AI safety funder thinks about portfolio construction.
80,000 Hours is a nonprofit that provides research and advice on how to use your career to have the most positive impact on the world's most pressing problems, with significant focus on AI safety and existential risk. They offer career guides, job boards, and in-depth research on high-priority cause areas and career paths. Their methodology emphasizes earning to give, direct work in high-impact fields, and building career capital.
CSET (Center for Security and Emerging Technology) at Georgetown University is a policy research organization focused on the security implications of emerging technologies, particularly AI. It produces research on AI policy, workforce, geopolitics, and governance.
Kaplan and Garrick (1981) propose a foundational quantitative definition of risk based on the concept of a 'set of triplets,' establishing a formal framework for risk analysis. The paper extends this definition to incorporate uncertainty and completeness, utilizing Bayes' theorem for probabilistic reasoning. The authors apply their framework to discuss key concepts including relative risk, the relativity of risk across contexts, and the acceptability of risk, providing a mathematical foundation for systematic risk assessment that has become influential in the risk analysis field.
Open Philanthropy's focus area page on potential risks from advanced AI outlines their strategic grantmaking approach to reducing catastrophic and existential risks from transformative AI systems. It explains their reasoning for prioritizing AI safety research, policy work, and field-building as among the most important philanthropic opportunities of our time.
AI Impacts is a research organization that investigates empirical questions relevant to AI forecasting and safety, including AI timelines, discontinuous progress risks, and existential risk arguments. It maintains a wiki and blog featuring expert surveys, historical analyses, and structured arguments about transformative AI development. Notable outputs include periodic expert surveys on AI progress timelines.
The Future of Life Institute's AI Safety Index Summer 2025 systematically evaluates leading AI companies on safety practices, finding widespread deficiencies across risk management, transparency, and existential safety planning. Anthropic receives the highest grade of C+, indicating that even the best-performing company falls significantly short of adequate safety standards. The report serves as a comparative benchmark for industry accountability.
The AI Safety Fund (AISF) is a $10 million+ collaborative initiative launched in October 2023 by Anthropic, Google, Microsoft, and OpenAI (via the Frontier Model Forum) along with philanthropic partners to fund independent AI safety and security research. It has distributed two rounds of grants focused on responsible frontier AI development, public safety risk reduction, and standardized third-party capability evaluations. The fund is now directly managed by the Frontier Model Forum following the closure of its original administrator, the Meridian Institute.