AI Risk Portfolio Analysis
ai-risk-portfolio-analysis (E12)
Path: /knowledge-base/models/ai-risk-portfolio-analysis/
Page Metadata
{
"id": "ai-risk-portfolio-analysis",
"numericId": null,
"path": "/knowledge-base/models/ai-risk-portfolio-analysis/",
"filePath": "knowledge-base/models/ai-risk-portfolio-analysis.mdx",
"title": "AI Risk Portfolio Analysis",
"quality": 64,
"importance": 82,
"contentFormat": "article",
"tractability": null,
"neglectedness": null,
"uncertainty": null,
"causalLevel": null,
"lastUpdated": "2025-12-28",
"llmSummary": "Quantitative portfolio framework recommending AI safety resource allocation: 40-70% to misalignment, 15-35% to misuse, 10-25% to structural risks, varying by timeline. Based on 2024 funding analysis ($110-130M total), identifies specific gaps including governance (underfunded by $15-20M), agent safety ($7-12M gap), and international capacity ($11-16M gap).",
"structuredSummary": null,
"description": "A quantitative framework for resource allocation across AI risk categories. Analysis estimates misalignment accounts for 40-70% of existential risk, misuse 15-35%, and structural risks 10-25%, with timeline-dependent recommendations. Based on 2024 funding data ($110-130M total external funding), recommends rebalancing toward governance (currently underfunded by ~$15-20M) and interpretability research.",
"ratings": {
"focus": 8.5,
"novelty": 4.5,
"rigor": 6,
"completeness": 7.5,
"concreteness": 8,
"actionability": 7.5
},
"category": "models",
"subcategory": "analysis-models",
"clusters": [
"ai-safety",
"governance"
],
"metrics": {
"wordCount": 2243,
"tableCount": 24,
"diagramCount": 3,
"internalLinks": 38,
"externalLinks": 14,
"footnoteCount": 0,
"bulletRatio": 0.01,
"sectionCount": 34,
"hasOverview": true,
"structuralScore": 15
},
"suggestedQuality": 100,
"updateFrequency": 90,
"evergreen": true,
"wordCount": 2243,
"unconvertedLinks": [
{
"text": "Longview Philanthropy estimates",
"url": "https://forum.effectivealtruism.org/posts/XdhwXppfqrpPL2YDX/an-overview-of-the-ai-safety-funding-situation",
"resourceId": "80125fcaf04609b8",
"resourceTitle": "Overview of AI Safety Funding"
},
{
"text": "AI Impacts survey",
"url": "https://aiimpacts.org/",
"resourceId": "3b9fda03b8be71dc",
"resourceTitle": "AI Impacts 2023"
},
{
"text": "detailed analysis",
"url": "https://www.lesswrong.com/posts/WGpFFJo2uFe5ssgEb/an-overview-of-the-ai-safety-funding-situation",
"resourceId": "b1ab921f9cbae109",
"resourceTitle": "An Overview of the AI Safety Funding Situation (LessWrong)"
},
{
"text": "Longview Philanthropy Analysis",
"url": "https://forum.effectivealtruism.org/posts/XdhwXppfqrpPL2YDX/an-overview-of-the-ai-safety-funding-situation",
"resourceId": "80125fcaf04609b8",
"resourceTitle": "Overview of AI Safety Funding"
},
{
"text": "FLI AI Safety Index",
"url": "https://futureoflife.org/ai-safety-index-summer-2025/",
"resourceId": "df46edd6fa2078d1",
"resourceTitle": "FLI AI Safety Index Summer 2025"
},
{
"text": "Frontier Model Forum",
"url": "https://www.frontiermodelforum.org/ai-safety-fund/",
"resourceId": "6bc74edd147a374b",
"resourceTitle": "AI Safety Fund"
}
],
"unconvertedLinkCount": 6,
"convertedLinkCount": 19,
"backlinkCount": 2,
"redundancy": {
"maxSimilarity": 16,
"similarPages": [
{
"id": "safety-research-value",
"title": "Expected Value of AI Safety Research",
"path": "/knowledge-base/models/safety-research-value/",
"similarity": 16
},
{
"id": "capabilities-to-safety-pipeline",
"title": "Capabilities-to-Safety Pipeline Model",
"path": "/knowledge-base/models/capabilities-to-safety-pipeline/",
"similarity": 14
},
{
"id": "compounding-risks-analysis",
"title": "Compounding Risks Analysis",
"path": "/knowledge-base/models/compounding-risks-analysis/",
"similarity": 14
},
{
"id": "intervention-effectiveness-matrix",
"title": "Intervention Effectiveness Matrix",
"path": "/knowledge-base/models/intervention-effectiveness-matrix/",
"similarity": 14
},
{
"id": "safety-research",
"title": "Safety Research & Resources",
"path": "/knowledge-base/metrics/safety-research/",
"similarity": 13
}
]
}
}
Entity Data
{
"id": "ai-risk-portfolio-analysis",
"type": "model",
"title": "AI Risk Portfolio Analysis",
"description": "This framework compares AI risk categories to guide resource allocation. It estimates misalignment accounts for 40-70% of x-risk, misuse 15-35%, and structural risks 10-25%, though all estimates carry ±50% uncertainty.",
"tags": [
"prioritization",
"resource-allocation",
"portfolio",
"strategy",
"comparative-analysis"
],
"relatedEntries": [
{
"id": "compounding-risks-analysis",
"type": "model",
"relationship": "related"
},
{
"id": "flash-dynamics-threshold",
"type": "model",
"relationship": "related"
}
],
"sources": [],
"lastUpdated": "2025-12",
"customFields": [
{
"label": "Model Type",
"value": "Prioritization Framework"
},
{
"label": "Focus",
"value": "Resource Allocation"
},
{
"label": "Key Output",
"value": "Risk magnitude comparisons and allocation recommendations"
}
]
}
Canonical Facts (0)
No facts for this entity
External Links
No external links
Backlinks (2)
| id | title | type | relationship |
|---|---|---|---|
| worldview-intervention-mapping | Worldview-Intervention Mapping | model | related |
| intervention-timing-windows | Intervention Timing Windows | model | related |
Frontmatter
{
"title": "AI Risk Portfolio Analysis",
"description": "A quantitative framework for resource allocation across AI risk categories. Analysis estimates misalignment accounts for 40-70% of existential risk, misuse 15-35%, and structural risks 10-25%, with timeline-dependent recommendations. Based on 2024 funding data ($110-130M total external funding), recommends rebalancing toward governance (currently underfunded by ~$15-20M) and interpretability research.",
"sidebar": {
"order": 50
},
"quality": 64,
"lastEdited": "2025-12-28",
"ratings": {
"focus": 8.5,
"novelty": 4.5,
"rigor": 6,
"completeness": 7.5,
"concreteness": 8,
"actionability": 7.5
},
"importance": 82,
"update_frequency": 90,
"llmSummary": "Quantitative portfolio framework recommending AI safety resource allocation: 40-70% to misalignment, 15-35% to misuse, 10-25% to structural risks, varying by timeline. Based on 2024 funding analysis ($110-130M total), identifies specific gaps including governance (underfunded by $15-20M), agent safety ($7-12M gap), and international capacity ($11-16M gap).",
"todos": [
"Complete 'Conceptual Framework' section",
"Complete 'Quantitative Analysis' section (8 placeholders)",
"Complete 'Strategic Importance' section",
"Complete 'Limitations' section (6 placeholders)"
],
"clusters": [
"ai-safety",
"governance"
],
"subcategory": "analysis-models",
"entityType": "model"
}
Raw MDX Source
---
title: AI Risk Portfolio Analysis
description: A quantitative framework for resource allocation across AI risk categories. Analysis estimates misalignment accounts for 40-70% of existential risk, misuse 15-35%, and structural risks 10-25%, with timeline-dependent recommendations. Based on 2024 funding data ($110-130M total external funding), recommends rebalancing toward governance (currently underfunded by ~$15-20M) and interpretability research.
sidebar:
order: 50
quality: 64
lastEdited: "2025-12-28"
ratings:
focus: 8.5
novelty: 4.5
rigor: 6
completeness: 7.5
concreteness: 8
actionability: 7.5
importance: 82
update_frequency: 90
llmSummary: "Quantitative portfolio framework recommending AI safety resource allocation: 40-70% to misalignment, 15-35% to misuse, 10-25% to structural risks, varying by timeline. Based on 2024 funding analysis ($110-130M total), identifies specific gaps including governance (underfunded by $15-20M), agent safety ($7-12M gap), and international capacity ($11-16M gap)."
todos:
- Complete 'Conceptual Framework' section
- Complete 'Quantitative Analysis' section (8 placeholders)
- Complete 'Strategic Importance' section
- Complete 'Limitations' section (6 placeholders)
clusters:
- ai-safety
- governance
subcategory: analysis-models
entityType: model
---
import {Aside, DataInfoBox, KeyQuestions, Mermaid, R, EntityLink} from '@components/wiki';
<DataInfoBox entityId="E12" ratings={frontmatter.ratings} />
## Overview
This framework provides quantitative estimates for allocating limited resources across AI risk categories. Based on expert surveys and risk assessment methodologies from organizations like <R id="0a17f30e99091ebf">RAND</R> and <R id="f0d95954b449240a"><EntityLink id="E524">Center for Security and Emerging Technology</EntityLink> (CSET)</R>, the analysis estimates misalignment accounts for 40-70% of existential risk, misuse 15-35%, and structural risks 10-25%.
The model draws from <R id="d199149badb220f3">portfolio optimization theory</R> and <R id="d64c91adf6a2e394"><EntityLink id="E521">Coefficient Giving's</EntityLink> cause prioritization framework</R>, addressing the critical question: how should the AI safety community allocate its \$100M+ in annual resources across different risk categories? All estimates carry substantial uncertainty (±50% or higher), so the framework's value lies in relative comparisons rather than precise numbers.
## Risk Assessment Matrix
| Risk Category | X-Risk Share | P(Catastrophe) | Tractability | Neglectedness | Current Allocation |
|--------------|---------------|-----------------|--------------|---------------|-------------------|
| Misalignment | 40-70% | 15-45% | 2.5/5 | 3/5 | ≈50% |
| Misuse | 15-35% | 8-25% | 3.5/5 | 4/5 | ≈25% |
| Structural | 10-25% | 5-15% | 4/5 | 4.5/5 | ≈15% |
| Accidents (non-X) | 5-15% | 20-40% | 4.5/5 | 2.5/5 | ≈10% |
<Aside type="caution" title="Uncertainty Bounds">
These estimates represent informed speculation based on limited data. <R id="ad946fbdfec12e8c">Superforecasters</R> and <R id="38eba87d0a888e2e">AI experts show significant disagreement</R> on these parameters, with confidence intervals often spanning 2-3x.
</Aside>
## Strategic Prioritization Framework
### Expected Value Calculation
The framework applies standard expected value methodology:
$$
\text{Priority Score} = \text{Risk Magnitude} \times \text{P(Success)} \times \text{Neglectedness Multiplier}
$$
| Category | Risk Magnitude | P(Success) | Neglectedness | Priority Score |
|----------|----------------|------------|---------------|----------------|
| Misalignment | 8.5/10 | 0.25 | 0.6 | 1.28 |
| Misuse | 6.0/10 | 0.35 | 0.8 | 1.68 |
| Structural | 4.5/10 | 0.40 | 0.9 | 1.62 |
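To make the arithmetic explicit, here is a minimal Python sketch (illustrative only, not part of the original analysis) that reproduces the priority scores from the point estimates above and shows what a naive normalization into portfolio weights would imply:

```python
# Illustrative sketch: recomputing the priority scores in the table above and
# normalizing them into weights. All inputs are the table's point estimates;
# real use should propagate the stated ±50% (or wider) uncertainty.

categories = {
    # name: (risk magnitude /10, P(success), neglectedness multiplier)
    "misalignment": (8.5, 0.25, 0.6),
    "misuse":       (6.0, 0.35, 0.8),
    "structural":   (4.5, 0.40, 0.9),
}

def priority_score(risk_magnitude: float, p_success: float, neglectedness: float) -> float:
    """Priority Score = Risk Magnitude x P(Success) x Neglectedness Multiplier."""
    return risk_magnitude * p_success * neglectedness

scores = {name: priority_score(*params) for name, params in categories.items()}
total = sum(scores.values())

for name, score in scores.items():
    print(f"{name:12s}  score={score:.2f}  naive weight={score / total:.0%}")
# misalignment  score=1.28  naive weight=28%
# misuse        score=1.68  naive weight=37%
# structural    score=1.62  naive weight=35%
```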
### Timeline-Dependent Allocation
Resource allocation should vary significantly based on <EntityLink id="E399">AGI timeline beliefs</EntityLink>:
| Timeline Scenario | Misalignment | Misuse | Structural | Rationale |
|------------------|--------------|--------|------------|-----------|
| Short (2-5 years) | 70-80% | 15-20% | 5-10% | Only time for direct alignment work |
| Medium (5-15 years) | 50-60% | 25-30% | 15-20% | Balanced portfolio approach |
| Long (15+ years) | 40-50% | 20-25% | 25-30% | Time for institutional solutions |
For the medium-timeline scenario, the current allocation sits reasonably close to the recommended optimum:

| Category | Current Allocation | Optimal (Medium Timeline) |
|----------|--------------------|---------------------------|
| Misalignment | ≈50% | ≈55% |
| Misuse | ≈25% | ≈27% |
| Structural | ≈15% | ≈18% |
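Since few funders hold a single point estimate for timelines, one natural extension (an assumption of this sketch, not something the table specifies) is to blend the scenario allocations by subjective timeline probabilities:

```python
# Hypothetical sketch: blending the timeline-dependent allocations by a funder's
# subjective probability over timeline scenarios. Scenario allocations are the
# midpoints of the table's ranges; the beliefs are placeholders to swap out.

scenario_allocations = {
    # scenario: (misalignment, misuse, structural)
    "short (2-5y)":   (0.75, 0.175, 0.075),
    "medium (5-15y)": (0.55, 0.275, 0.175),
    "long (15+y)":    (0.45, 0.225, 0.275),
}

timeline_beliefs = {"short (2-5y)": 0.3, "medium (5-15y)": 0.5, "long (15+y)": 0.2}

blended = [0.0, 0.0, 0.0]
for scenario, allocation in scenario_allocations.items():
    p = timeline_beliefs[scenario]
    blended = [b + p * w for b, w in zip(blended, allocation)]

for label, share in zip(("misalignment", "misuse", "structural"), blended):
    print(f"{label:12s} {share:.1%}")
# Under these example beliefs: misalignment 59.0%, misuse 23.5%, structural 16.5%
```

Swapping in different `timeline_beliefs` shows how sensitive the blended portfolio is to timeline views, which is the main point of the table above.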
## Marginal Value Analysis
### Current Bottlenecks by Risk Category
| Category | Primary Bottleneck | Marginal $ Value | Saturation Risk | Key Organizations |
|----------|-------------------|------------------|-----------------|------------------|
| Misalignment | Conceptual clarity | High (if skilled) | Medium | <EntityLink id="E202">MIRI</EntityLink>, <EntityLink id="E22">Anthropic</EntityLink> |
| Misuse | Government engagement | Very High | Low | <R id="58f6946af0177ca5">CNAS</R>, <R id="f0d95954b449240a">CSET</R> |
| Structural | Framework development | High | Very Low | <EntityLink id="E153">GovAI</EntityLink>, <EntityLink id="E47">CAIS</EntityLink> |
| Accidents | Implementation gaps | Medium | High | <R id="0e7aef26385afeed">Partnership on AI</R> |
### Funding Landscape Analysis
Based on comprehensive analysis from <EntityLink id="E521">Coefficient Giving</EntityLink>, [Longview Philanthropy estimates](https://forum.effectivealtruism.org/posts/XdhwXppfqrpPL2YDX/an-overview-of-the-ai-safety-funding-situation), and [LTFF reporting](https://forum.effectivealtruism.org/posts/pJyCWzevPHsycj4oQ/long-term-future-fund-may-2023-to-march-2024-payout), external AI safety funding reached approximately \$110-130M in 2024:
| Funding Source | 2024 Amount | Share | Key Focus Areas |
|----------------|-------------|-------|-----------------|
| <EntityLink id="E521">Coefficient Giving</EntityLink> | \$63.6M | ≈49% | Technical alignment, evaluations, governance |
| Survival & Flourishing Fund | \$19M+ | ≈15% | Diverse safety research |
| Long-Term Future Fund | \$5.4M | ≈4% | Early-career, small orgs |
| Jaan Tallinn & individual donors | \$20M | ≈15% | Direct grants to researchers |
| Government (US/UK/EU) | \$32.4M | ≈25% | Policy-aligned research |
| Other (foundations, corporate) | \$10-20M | ≈10% | Various |
The breakdown by research area reveals significant concentration in interpretability and evaluations:
| Research Area | 2024 Funding | Share | Trend | Optimal (Medium Timeline) |
|---------------|--------------|-------|-------|---------------------------|
| Interpretability | \$52M | 40% | Growing | 30-35% |
| Evaluations/benchmarking | \$23M | 18% | Rapid growth | 15-20% |
| Constitutional AI/RLHF | \$38M | 29% | Stable | 25-30% |
| Governance/policy | \$18M | 14% | Underfunded | 20-25% |
| Red-teaming | \$15M | 12% | Growing | 10-15% |
| Agent safety | \$8.2M | 6% | Emerging | 10-15% |
<Aside type="note" title="Coefficient Giving Dominance">
<EntityLink id="E521">Coefficient Giving</EntityLink> accounts for nearly 60% of all external AI safety investment, with \$63.6M deployed in 2024. Since 2017, Coefficient Giving has donated approximately \$336M to AI safety (~12% of their total \$2.8B in giving). The median Coefficient Giving AI safety grant is \$257k; the average is \$1.67M.
</Aside>
## Risk Interdependency Network
Rather than forming independent categories, these risks interact in ways that affect prioritization:
<Mermaid chart={`
flowchart TD
CAP[AI Capabilities] -->|amplifies| MIS[Misalignment Risk]
CAP -->|enables| USE[Misuse Risk]
USE -->|degrades| GOV[Governance Quality]
GOV -->|mitigates| USE
GOV -->|weakly mitigates| MIS
STR[Structural Risks] -->|erodes| GOV
MIS -->|if realized| STR
style CAP fill:#ff9999
style MIS fill:#ffcccc
style USE fill:#ffffcc
style GOV fill:#ccffcc
style STR fill:#ccccff
`} />
### Correlation Matrix
| Risk Pair | Correlation | Implication for Portfolio |
|-----------|-------------|--------------------------|
| Misalignment ↔ Capabilities | +0.8 | High correlation; capabilities research affects risk |
| Misuse ↔ Governance Quality | -0.6 | Good governance significantly reduces misuse |
| Structural ↔ All Others | +0.4 | Structural risks amplify other categories |
## Comparative Assessment Methods
### Expert Survey Results
Multiple surveys reveal substantial disagreement on AI risk magnitude. The <R id="38eba87d0a888e2e">AI Impacts 2022 expert survey</R> of 738 AI researchers and the [Conjecture internal survey](https://www.lesswrong.com/posts/kygEPBDrGGoM8rz9a/conjecture-internal-survey-agi-timelines-and-probability-of) provide contrasting perspectives:
| Risk Category | AI Impacts Median | Conjecture Median | Expert Disagreement (IQR) | Notes |
|--------------|-------------------|-------------------|---------------------------|-------|
| Total AI X-risk | 5-10% | 80% | 2-90% | Massive disagreement |
| Misalignment-specific | 25% | 60%+ | 10-50% | Safety org workers higher |
| Misuse (Bio/weapons) | 15% | 30-40% | 5-35% | Growing concern |
| Economic Disruption | 35% | 50%+ | 20-60% | Most consensus |
| Authoritarian Control | 20% | 40% | 8-45% | Underexplored |
<Aside type="tip" title="Interpreting Survey Disagreement">
The [Conjecture survey](https://www.alignmentforum.org/posts/kygEPBDrGGoM8rz9a/conjecture-internal-survey-agi-timelines-and-probability-of) (N=22 AI safety researchers) found no respondent reported less than 10% extinction risk, with a median of 80%. However, this sample has severe selection bias—AI safety researchers self-select for high risk estimates. The [AI Impacts survey](https://aiimpacts.org/) sampled mainstream ML researchers with lower risk estimates but suffered from non-response bias. True uncertainty likely spans 2-50% for catastrophic outcomes.
</Aside>
### Case Study Comparisons
Historical technology risk portfolios provide calibration:
| Technology | Primary Risk Focus | Secondary Risks | Outcome Assessment |
|------------|-------------------|-----------------|-------------------|
| Nuclear weapons | Accident prevention (60%) | Proliferation (40%) | Reasonable allocation |
| Climate change | Mitigation (70%) | Adaptation (30%) | Under-weighted adaptation |
| Internet security | Technical fixes (80%) | Governance (20%) | Under-weighted governance |
Pattern: **Technical communities systematically under-weight governance and structural interventions**.
## Uncertainty Analysis
### Key Cruxes Affecting Allocation
<KeyQuestions
questions={[
"What's the probability of transformative AI by 2030? (affects all allocations)",
"How tractable is technical alignment with current approaches?",
"Does AI lower bioweapons barriers by 10x or 1000x?",
"Are structural risks primarily instrumental or terminal concerns?",
"What's the correlation between AI capability and alignment difficulty?"
]}
/>
### Sensitivity Analysis
| Parameter Change | Effect on Misalignment Priority | Effect on Misuse Priority |
|------------------|--------------------------------|--------------------------|
| Timeline -50% (shorter) | +15-20 percentage points | -5-10 percentage points |
| Alignment tractability +50% | -10-15 percentage points | +5-8 percentage points |
| Bioweapons risk +100% | -5-8 percentage points | +10-15 percentage points |
| Governance effectiveness +50% | -3-5 percentage points | +8-12 percentage points |
## Geographic Distribution of Funding
The AI safety funding landscape shows significant geographic concentration, with implications for portfolio diversification:
| Region | 2024 Funding | Share | Key Organizations | Gap Assessment |
|--------|--------------|-------|-------------------|----------------|
| SF Bay Area | \$48M | 37% | CHAI, MIRI, Anthropic | Well-funded |
| London/Oxford | \$32M | 25% | FHI, DeepMind, GovAI | Well-funded |
| Boston/Cambridge | \$12M | 9% | MIT, Harvard | Growing |
| Washington DC | \$8M | 6% | CSET, CNAS, Brookings | Policy focus |
| Rest of US | \$10M | 8% | Academic dispersed | Moderate |
| Europe (non-UK) | \$8M | 6% | Berlin, Zurich hubs | Underfunded |
| Asia-Pacific | \$4M | 3% | Singapore, Australia | Severely underfunded |
| Rest of World | \$8M | 6% | Various | Very limited |
<Aside type="note" title="Emerging Hubs">
Government initiatives are expanding geographic coverage: Canada's \$12M AI Safety Research Initiative, Australia's \$8.4M Responsible AI Program, and Singapore's \$5.6M AI Ethics Research Fund launched in 2024-2025. These represent opportunities for funding diversification beyond the US/UK axis.
</Aside>
<Mermaid chart={`
flowchart LR
  subgraph US["United States (60%)"]
    SF[SF Bay Area<br/>37%]
    DC[DC<br/>6%]
    BOS[Boston<br/>9%]
    RUS[Rest of US<br/>8%]
  end
  subgraph UK["United Kingdom (25%)"]
    LON[London/Oxford<br/>25%]
  end
  subgraph Other["Other (15%)"]
    EU[Europe non-UK<br/>6%]
    APAC[Asia-Pacific<br/>3%]
    ROW[Rest of World<br/>6%]
  end
  style SF fill:#90EE90
  style LON fill:#90EE90
  style EU fill:#FFB6C1
  style APAC fill:#FF6B6B
  style ROW fill:#FFB6C1
`} />
## Implementation Recommendations
### For Major Funders
Based on 2024 funding analysis, specific portfolio rebalancing recommendations:
| Funder Type | Current Allocation | Recommended Shift | Specific Opportunities | Priority |
|-------------|-------------------|-------------------|------------------------|----------|
| <EntityLink id="E521">Coefficient Giving</EntityLink> | 68% evals, 12% interp | +15% governance, +10% agent safety | GovAI expansion, international capacity | High |
| SFF/individual donors | Technical focus | +\$5-10M to neglected areas | Value learning, formal verification | High |
| LTFF | Early career, small orgs | Maintain current portfolio | Continue diversified approach | Medium |
| Government agencies | Policy-aligned research | +\$20-30M to independent oversight | AISI expansion, red-teaming | Very High |
| Tech philanthropists | Varies widely | Coordinate via giving circles | Reduce duplication | Medium |
**Specific Funding Gaps (2025):**
| Gap Area | Current Funding | Optimal | Gap | Recommended Recipients |
|----------|-----------------|---------|-----|------------------------|
| Agent safety | \$8.2M | \$15-20M | \$7-12M | METR, Apollo, academic groups |
| Value alignment theory | \$6.5M | \$12-15M | \$5-9M | MIRI, academic philosophy |
| International capacity | \$4M | \$15-20M | \$11-16M | Non-US/UK hubs |
| Governance research | \$18M | \$25-35M | \$7-17M | GovAI, CSET, Brookings |
| Red-teaming | \$15M | \$20-25M | \$5-10M | Independent evaluators |
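The gap column is simple arithmetic over these ranges; a short sketch (using the table's own figures) reproduces it and orders areas by midpoint gap:

```python
# Sketch reproducing the gap arithmetic above: gap = optimal range minus current
# funding, sorted by midpoint. Figures are the table's 2025 estimates, in $M.

areas = {
    "Agent safety":           (8.2, (15, 20)),
    "Value alignment theory": (6.5, (12, 15)),
    "International capacity": (4.0, (15, 20)),
    "Governance research":    (18.0, (25, 35)),
    "Red-teaming":            (15.0, (20, 25)),
}

gaps = {
    name: (lo - current, hi - current)
    for name, (current, (lo, hi)) in areas.items()
}

for name, (g_lo, g_hi) in sorted(gaps.items(), key=lambda kv: -sum(kv[1]) / 2):
    print(f"{name:24s} gap ≈ ${g_lo:.0f}-{g_hi:.0f}M")
# International capacity and governance research carry the largest midpoint gaps.
```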
### For Research Organizations
**Capability-Building Priorities:**
| Organization Size | Primary Focus | Secondary Focus | Rationale |
|------------------|---------------|-----------------|-----------|
| Large (>50 people) | Maintain current specialization | Add governance capacity | Comparative advantage |
| Medium (10-50 people) | 70% core competency | 30% neglected areas | Diversification benefits |
| Small (\<10 people) | Focus on highest neglectedness | None | Resource constraints |
### For Individual Researchers
Career decision framework based on <R id="ec456e4a78161d43">80,000 Hours methodology</R>:
| Career Stage | If Technical Background | If Policy Background | If Economics/Social Science |
|--------------|------------------------|---------------------|----------------------------|
| Early (0-5 years) | Alignment research | Misuse prevention | Structural risk analysis |
| Mid (5-15 years) | Stay in alignment vs. pivot | Government engagement | Institution design |
| Senior (15+ years) | Research leadership | Policy implementation | Field coordination |
## Current State and Trajectory
### 2024 Funding Landscape
Based on [detailed analysis](https://www.lesswrong.com/posts/WGpFFJo2uFe5ssgEb/an-overview-of-the-ai-safety-funding-situation) and <EntityLink id="E521">Coefficient Giving</EntityLink> grant data, external AI safety funding has evolved significantly:
| Year | External Funding | Internal Lab Safety | Total (Est.) | Key Developments |
|------|------------------|---------------------|--------------|------------------|
| 2020 | \$40-60M | \$50-100M | \$100-160M | Coefficient Giving ramping up |
| 2021 | \$60-80M | \$100-200M | \$160-280M | Anthropic founded |
| 2022 | \$80-100M | \$200-400M | \$280-500M | ChatGPT launch |
| 2023 | \$90-120M | \$400-600M | \$490-720M | Major lab investment |
| 2024 | \$110-130M | \$500-700M | \$610-830M | Government entry |
<Aside type="caution" title="Internal vs External Funding">
Major AI labs—Anthropic, OpenAI, and DeepMind—invest an estimated \$500M+ combined in internal safety research annually, dwarfing external philanthropic funding. However, internal research may face conflicts of interest with commercial objectives, making external independent funding particularly valuable for governance and red-teaming work.
</Aside>
### Coefficient Giving Technical AI Safety Grants (2024)
[Detailed analysis](https://www.lesswrong.com/posts/adzfKEW98TswZEA6T/brief-analysis-of-op-technical-ai-safety-funding) of <EntityLink id="E521">Coefficient Giving's</EntityLink> \$28M in Technical AI Safety grants reveals:
| Focus Area | Share of CG TAIS | Key Recipients | Assessment |
|------------|------------------|----------------|------------|
| Evaluations/benchmarking | 68% | METR, Apollo, UK AISI | Heavily funded |
| Interpretability | 12% | Anthropic, Redwood | Well-funded |
| Robustness | 8% | Academic groups | Moderate |
| Value alignment | 5% | MIRI, academic | Underfunded |
| Field building | 5% | MATS, training programs | Adequate |
| Other approaches | 2% | Various | Exploratory |
### Projected 2025-2027 Needs
| Scenario | Annual Need | Technical | Governance | Field Building | Rationale |
|----------|-------------|-----------|------------|----------------|-----------|
| Short timelines (2-5y) | \$300-500M | 70% | 20% | 10% | Maximize alignment progress |
| Medium timelines (5-15y) | \$200-350M | 55% | 30% | 15% | Build institutions + research |
| Long timelines (15+y) | \$150-250M | 45% | 35% | 20% | Institutional capacity |
<EntityLink id="E521">Coefficient Giving's</EntityLink> 2025 RFP commits at least \$40M to technical AI safety, with potential for "substantially more depending on application quality." Areas marked as priorities include agent safety, interpretability, and evaluation methods.
## Key Model Limitations
### What This Framework Doesn't Capture
| Limitation | Impact on Recommendations | Mitigation Strategy |
|------------|---------------------------|-------------------|
| Interaction effects | Under-estimates governance value | Weight structural risks higher |
| Option value | May over-focus on current priorities | Reserve 10-15% for exploration |
| Comparative advantage | Ignores organizational fit | Apply at implementation level |
| Black swan risks | May miss novel risk categories | Regular framework updates |
### Confidence Intervals
| Estimate | 90% Confidence Interval | Source of Uncertainty |
|----------|------------------------|----------------------|
| Misalignment share | 25-80% | Timeline disagreement |
| Current allocation optimality | ±20 percentage points | Tractability estimates |
| Marginal value rankings | Medium confidence | Limited empirical data |
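One way to make these intervals actionable (a hypothetical extension; uniform sampling is an assumed distribution the article does not specify) is to propagate them through a Monte Carlo normalization and read off the implied spread in risk shares:

```python
# Hypothetical sketch: Monte Carlo propagation of the interval estimates into a
# distribution over normalized risk shares.

import random
import statistics

def sample_shares() -> list[float]:
    mis = random.uniform(0.25, 0.80)      # misalignment share, 90% CI from the table above
    use = random.uniform(0.15, 0.35)      # misuse share (overview estimate)
    struct = random.uniform(0.10, 0.25)   # structural share (overview estimate)
    total = mis + use + struct
    return [mis / total, use / total, struct / total]

samples = [sample_shares() for _ in range(20_000)]
for i, label in enumerate(["misalignment", "misuse", "structural"]):
    values = sorted(s[i] for s in samples)
    lo, med, hi = values[1000], statistics.median(values), values[19000]
    print(f"{label:12s} 5th={lo:.0%}  median={med:.0%}  95th={hi:.0%}")
```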
## Sources & Resources
### Funding Data Sources
| Source | Type | Coverage | Update Frequency | URL |
|--------|------|----------|------------------|-----|
| <EntityLink id="E521">Coefficient Giving</EntityLink> Grants Database | Primary | All CG grants | Real-time | openphilanthropy.org |
| [EA Funds LTFF Reports](https://forum.effectivealtruism.org/posts/pJyCWzevPHsycj4oQ/long-term-future-fund-may-2023-to-march-2024-payout) | Primary | LTFF grants | Quarterly | effectivealtruism.org |
| [Longview Philanthropy Analysis](https://forum.effectivealtruism.org/posts/XdhwXppfqrpPL2YDX/an-overview-of-the-ai-safety-funding-situation) | Analysis | Landscape overview | Annual | EA Forum |
| [CG Technical Safety Analysis](https://www.lesswrong.com/posts/adzfKEW98TswZEA6T/brief-analysis-of-op-technical-ai-safety-funding) | Analysis | CG TAIS breakdown | Annual | LessWrong |
| <R id="f8f6f3ee55c2babe">Coefficient Giving</R> | Annual reports | Strategy & priorities | Annual | openphilanthropy.org |
### Expert Surveys
| Survey | Sample | Year | Key Finding | Methodology Notes |
|--------|--------|------|-------------|-------------------|
| <R id="38eba87d0a888e2e">Grace et al. (AI Impacts)</R> | 738 ML researchers | 2022 | 5-10% median x-risk | Non-response bias concern |
| [Conjecture Internal Survey](https://www.lesswrong.com/posts/kygEPBDrGGoM8rz9a/conjecture-internal-survey-agi-timelines-and-probability-of) | 22 safety researchers | 2023 | 80% median x-risk | Selection bias (safety workers) |
| [FLI AI Safety Index](https://futureoflife.org/ai-safety-index-summer-2025/) | Expert composite | 2025 | Low safety grades across frontier labs | Qualitative assessment |
### Academic Literature
| Category | Key Papers | Organization | Relevance |
|----------|------------|--------------|-----------|
| Portfolio Theory | <R id="b0303ec1db9a1cd0">Markowitz (1952)</R> | University of Chicago | Foundational framework |
| Risk Assessment | <R id="f1043d283b6cf307">Kaplan & Garrick (1981)</R> | UCLA | Risk decomposition |
| AI Risk Surveys | <R id="38eba87d0a888e2e">Grace et al. (2022)</R> | AI Impacts | Expert elicitation |
| Risk Taxonomy | [MIT AI Risk Repository](https://airisk.mit.edu/) (2024) | MIT | Comprehensive risk taxonomy |
### Policy Organizations
| Organization | Focus Area | Key Resources | 2024 Budget (Est.) |
|-------------|------------|---------------|-------------------|
| <R id="cf5fd74e8db11565">RAND Corporation</R> | Defense applications | National security risk assessments | \$5-10M AI-related |
| <R id="f0d95954b449240a">CSET</R> | Technology policy | AI governance frameworks | \$8-12M |
| <R id="3afc8d3cef185a83">CNAS</R> | Security implications | Military AI analysis | \$3-5M AI-related |
| [Frontier Model Forum](https://www.frontiermodelforum.org/ai-safety-fund/) | Industry coordination | AI Safety Fund (\$10M+) | \$10M+ |
## Related Models
This framework connects with several other analytical models:
- <EntityLink id="E63" label="Compounding Risks Analysis" /> - How risks interact and amplify
- <EntityLink id="E398" label="Critical Uncertainties Framework" /> - Key unknowns affecting strategy
- <EntityLink id="E414">Capability-Alignment Race Model</EntityLink> - Timeline dynamics
- <EntityLink id="E99" /> - Multi-layered risk mitigation