Longterm Wiki

ControlAI

controlai (E426)
Path: /knowledge-base/organizations/controlai/
Page Metadata
{
  "id": "controlai",
  "numericId": null,
  "path": "/knowledge-base/organizations/controlai/",
  "filePath": "knowledge-base/organizations/controlai.mdx",
  "title": "ControlAI",
  "quality": 63,
  "importance": 75,
  "contentFormat": "article",
  "tractability": null,
  "neglectedness": null,
  "uncertainty": null,
  "causalLevel": null,
  "lastUpdated": "2026-02-01",
  "llmSummary": "ControlAI is a UK-based advocacy organization that has achieved notable policy engagement success (briefing 150+ lawmakers, securing support from 100+ UK parliamentarians) while promoting direct institutional approaches to preventing AI superintelligence development through binding regulation. The organization represents a significant shift toward democratic governance approaches in AI safety, though faces skepticism about the feasibility of global coordination on AI development restrictions.",
  "structuredSummary": null,
  "description": "UK-based AI safety advocacy organization focused on preventing artificial superintelligence development through policy campaigns and grassroots outreach to lawmakers",
  "ratings": {
    "novelty": 4,
    "rigor": 6,
    "actionability": 7,
    "completeness": 8
  },
  "category": "organizations",
  "subcategory": "safety-orgs",
  "clusters": [
    "community",
    "ai-safety",
    "governance"
  ],
  "metrics": {
    "wordCount": 2483,
    "tableCount": 2,
    "diagramCount": 0,
    "internalLinks": 15,
    "externalLinks": 52,
    "footnoteCount": 50,
    "bulletRatio": 0.32,
    "sectionCount": 27,
    "hasOverview": true,
    "structuralScore": 11
  },
  "suggestedQuality": 73,
  "updateFrequency": 21,
  "evergreen": true,
  "wordCount": 2483,
  "unconvertedLinks": [],
  "unconvertedLinkCount": 0,
  "convertedLinkCount": 0,
  "backlinkCount": 0,
  "redundancy": {
    "maxSimilarity": 18,
    "similarPages": [
      {
        "id": "pause-ai",
        "title": "Pause AI",
        "path": "/knowledge-base/organizations/pause-ai/",
        "similarity": 18
      },
      {
        "id": "ai-futures-project",
        "title": "AI Futures Project",
        "path": "/knowledge-base/organizations/ai-futures-project/",
        "similarity": 16
      },
      {
        "id": "frontier-model-forum",
        "title": "Frontier Model Forum",
        "path": "/knowledge-base/organizations/frontier-model-forum/",
        "similarity": 16
      },
      {
        "id": "research-agendas",
        "title": "AI Alignment Research Agenda Comparison",
        "path": "/knowledge-base/responses/research-agendas/",
        "similarity": 16
      },
      {
        "id": "structural-risks",
        "title": "AI Structural Risk Cruxes",
        "path": "/knowledge-base/cruxes/structural-risks/",
        "similarity": 15
      }
    ]
  }
}
Entity Data
{
  "id": "controlai",
  "type": "organization",
  "title": "ControlAI",
  "description": "UK-based AI safety advocacy organization focused on preventing artificial superintelligence development through policy campaigns and grassroots outreach to lawmakers.",
  "tags": [
    "ai-advocacy",
    "policy-campaigns",
    "uk-policy",
    "binding-regulation",
    "grassroots"
  ],
  "relatedEntries": [
    {
      "id": "conjecture",
      "type": "organization"
    },
    {
      "id": "connor-leahy",
      "type": "researcher"
    },
    {
      "id": "eu-ai-act",
      "type": "concept"
    }
  ],
  "sources": [],
  "lastUpdated": "2026-02",
  "customFields": []
}
Canonical Facts (0)

No facts for this entity

External Links

No external links

Backlinks (0)

No backlinks

Frontmatter
{
  "title": "ControlAI",
  "description": "UK-based AI safety advocacy organization focused on preventing artificial superintelligence development through policy campaigns and grassroots outreach to lawmakers",
  "importance": 75,
  "lastEdited": "2026-02-01",
  "update_frequency": 21,
  "sidebar": {
    "order": 50
  },
  "ratings": {
    "novelty": 4,
    "rigor": 6,
    "actionability": 7,
    "completeness": 8
  },
  "quality": 63,
  "llmSummary": "ControlAI is a UK-based advocacy organization that has achieved notable policy engagement success (briefing 150+ lawmakers, securing support from 100+ UK parliamentarians) while promoting direct institutional approaches to preventing AI superintelligence development through binding regulation. The organization represents a significant shift toward democratic governance approaches in AI safety, though faces skepticism about the feasibility of global coordination on AI development restrictions.",
  "clusters": [
    "community",
    "ai-safety",
    "governance"
  ],
  "subcategory": "safety-orgs",
  "entityType": "organization"
}
Raw MDX Source
---
title: ControlAI
description: UK-based AI safety advocacy organization focused on preventing artificial superintelligence development through policy campaigns and grassroots outreach to lawmakers
importance: 75
lastEdited: "2026-02-01"
update_frequency: 21
sidebar:
  order: 50
ratings:
  novelty: 4
  rigor: 6
  actionability: 7
  completeness: 8
quality: 63
llmSummary: ControlAI is a UK-based advocacy organization that has achieved notable policy engagement success (briefing 150+ lawmakers, securing support from 100+ UK parliamentarians) while promoting direct institutional approaches to preventing AI superintelligence development through binding regulation. The organization represents a significant shift toward democratic governance approaches in AI safety, though faces skepticism about the feasibility of global coordination on AI development restrictions.
clusters:
  - community
  - ai-safety
  - governance
subcategory: safety-orgs
entityType: organization
---
import {EntityLink, KeyPeople, Section} from '@components/wiki';

## Quick Assessment

| Dimension | Assessment |
|-----------|------------|
| **Type** | AI safety advocacy organization (501(c)(4) nonprofit) |
| **Founded** | 2023 (emerged from <EntityLink id="E70">Conjecture</EntityLink>) |
| **Location** | London, England |
| **Primary Focus** | Preventing artificial superintelligence (ASI) development through policy advocacy and lawmaker engagement |
| **Key Achievement** | Briefed 150+ lawmakers; secured support from 100+ UK parliamentarians for binding AI regulation |
| **Approach** | Policy briefs, public campaigns, grassroots outreach, media engagement |
| **Funding** | Raising £1M (expected to close within 1-2 months as of late 2025/early 2026); no major funders disclosed |

## Key Links

| Source | Link |
|--------|------|
| Official Website | [controlai.com](https://controlai.com) |
| Wikipedia (AI capability control) | [en.wikipedia.org](https://en.wikipedia.org/wiki/AI_capability_control) |

## Overview

ControlAI is a UK-based organization focused on AI safety and policy advocacy, with the mission to prevent the development of artificial superintelligence (ASI) and ensure humanity retains control over advanced AI systems.[^1] The organization operates primarily through campaigns, policy proposals, and public engagement rather than technical research, emphasizing the need for democratic control over transformative AI development.

Founded in 2023 as an offshoot of Conjecture, ControlAI has positioned itself as one of the most professionalized AI activist groups, producing high-quality media campaigns and policy briefs targeted at lawmakers and the general public.[^2] Its core tagline, "Fighting to keep humanity in control," reflects campaigns aimed at controlling <EntityLink id="E96">deepfakes</EntityLink>, AI scaling, foundation models, and AI overall.[^3]

ControlAI's primary theory of change centers on the "Direct Institutional Plan" (DIP), launched in March 2025, which promotes safe-by-design AI engineering, metrology of intelligence, and human-controlled transformative AI.[^4] The organization warns that no current methods exist to contain systems more intelligent than all humanity combined, echoing warnings from AI scientists, world leaders, and CEOs about potential human extinction risks.[^5]

## History

### Founding and Early Development

ControlAI was founded in 2023 by Andrea Miotti, emerging as an offshoot of Conjecture, an AI startup led by <EntityLink id="E71">Connor Leahy</EntityLink>.[^6] The organization was established in the lead-up to the AI Safety Summit at Bletchley Park, UK, where it made a notable splash by hiring a blimp to fly over the summit as part of its advertising campaigns.[^7]

Andrea Miotti, who holds a PhD in machine learning robustness and previously worked at Palantir and BCG, founded the organization after leading communications and policy efforts at Conjecture.[^8] The organization operates as a nonprofit "private company limited by guarantee" in the UK and as a 501(c)(4) nonprofit in the US.[^9]

### Evolution of Strategy

From its inception through 2024, ControlAI ran several major campaigns:

- **October 2023**: Prevented international endorsement of scaling policies at the AI Safety Summit[^10]
- **November-December 2023**: Opposed exemptions for foundation models in the <EntityLink id="E127">EU AI Act</EntityLink>[^11]
- **December 2023 - June 2024**: Ran a major campaign against deepfakes[^12]

The organization has evolved from a think tank model toward grassroots outreach and direct engagement with policymakers, seeking to prevent ASI development through this direct work.[^13]

### Recent Developments (2025-2026)

In March 2025, ControlAI launched "The Direct Institutional Plan" as its comprehensive strategy for achieving binding regulation on advanced AI systems.[^14] The UK pilot campaign, running from November 2024 through May 2025, produced strong results: the organization briefed 84 cross-party UK parliamentarians (of those briefed, roughly 4 in 10 were MPs, 3 in 10 were Lords, and 2 in 10 sat in devolved legislatures), with over 20 publicly supporting its campaign for binding regulation.[^15]

By December 2025, a broad cross-party coalition of lawmakers had publicly called for binding regulation on powerful AI systems, a major milestone for the organization's advocacy efforts.[^16] As of early 2026, the organization continues scaling the DIP to the UK executive branch and expanding to the US and other countries.[^17]

## Leadership and Team

<KeyPeople people={[
  {
    name: "Andrea Miotti",
    role: "Founder, Executive Director, and CEO",
    description: "PhD in ML robustness; previously at Palantir and BCG; led communications and policy at Conjecture before founding ControlAI",
    links: []
  },
  {
    name: "Connor Leahy",
    role: "Advisor",
    description: "CEO of Conjecture and EleutherAI co-founder; participant in first AI Safety Summit; leading voice on AI risk mitigation",
    links: []
  },
  {
    name: "Gabriel Alfour",
    role: "Advisor",
    description: "CTO at Conjecture; helped Andrea Miotti found ControlAI; background in theoretical computer science",
    links: []
  }
]} />

Andrea Miotti serves as the organization's public face, featured in media outlets and podcasts discussing AI extinction risks.[^18] The organization reportedly has 9 employees as of 2024.[^19]

## Policy Approach and Campaigns

### The Direct Institutional Plan (DIP)

ControlAI's flagship initiative is the Direct Institutional Plan, a three-phase policy framework (Safety, Stability, Flourishing) that uses computing power as a proxy for AI capabilities.[^20] The plan advocates for:

- **Bans on superintelligence development**: Prohibition of systems more intelligent than all humanity combined
- **Dangerous capability restrictions**: Preventing automated AI research, advanced hacking capabilities, and recursive self-improvement
- **Pre-deployment demonstrations**: Requiring developers to prove system safety before release
- **AI development licensing**: Establishing regulatory frameworks for advanced AI development
- **Mandatory kill switches**: Requiring emergency shutdown capabilities for advanced systems
- **Compute cluster monitoring**: Tracking large-scale AI training infrastructure[^21]

The DIP is designed as a collaborative framework open to citizens and organizations worldwide, emphasizing independent participation rather than exclusive partnerships.[^22] ControlAI has developed country-specific policy briefs and offers advice to influential individuals and organizations via a dedicated partners page.[^23]
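The compute-as-proxy idea behind these proposals can be illustrated with a minimal sketch: a regulator receiving reports of large training runs compares each run's estimated total compute against notification and licensing thresholds. The threshold values, field names, and classifications below are illustrative assumptions, not figures taken from the DIP.

```python
from dataclasses import dataclass

# Hypothetical thresholds in total training FLOP. The DIP treats compute
# as a proxy for capability; these specific cutoffs are assumptions for
# illustration, not ControlAI's proposed numbers.
NOTIFICATION_THRESHOLD_FLOP = 1e25
LICENSE_THRESHOLD_FLOP = 1e26

@dataclass
class TrainingRun:
    developer: str
    model_name: str
    total_flop: float  # estimated total training compute

def classify_run(run: TrainingRun) -> str:
    """Classify a reported training run against compute thresholds."""
    if run.total_flop >= LICENSE_THRESHOLD_FLOP:
        return "license required: pre-deployment safety demonstration"
    if run.total_flop >= NOTIFICATION_THRESHOLD_FLOP:
        return "notify regulator: subject to compute-cluster monitoring"
    return "below thresholds: no regulatory action"

if __name__ == "__main__":
    runs = [
        TrainingRun("LabA", "frontier-model", 3e26),
        TrainingRun("LabB", "mid-scale-model", 5e25),
        TrainingRun("LabC", "small-model", 1e23),
    ]
    for run in runs:
        print(f"{run.developer}/{run.model_name}: {classify_run(run)}")
```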

### "A Narrow Path" Policy Framework

The organization's "A Narrow Path" policy paper underwent systematic evaluation through a policy sprint red-teamed by Apart Research in July 2025.[^24] The sprint evaluated six policies with code released for reproducibility, demonstrating scalable monitoring of capability acquisition via phase transitions and agent dynamics simulations across eight sectors (from enterprise to critical infrastructure).[^25]

### Advocacy Tools and Resources

ControlAI has created tools enabling citizens to contact lawmakers, executives, civil service, media, and civil society in their jurisdictions to advocate for superintelligence risk policies.[^26] These tools have facilitated over 150,000 messages sent to lawmakers.[^27]

## Impact and Achievements

### Lawmaker Engagement

ControlAI's most significant achievement has been its success in engaging policymakers on AI extinction risks:

- Briefed **150+ lawmakers** on AI extinction risk
- Secured support from **100+ UK parliamentarians** for their campaign
- Achieved public endorsement from **over 20 cross-party UK parliamentarians**
- Drafted an AI bill and presented it to the UK Prime Minister's office[^28]

The organization's cold-email campaign to British MPs and Lords engaged 70 representatives, 31 of whom publicly came out against ASI development, a conversion rate that defied initial predictions that lawmakers would resist strong extinction-risk messaging.[^29]

### Public Opinion and Reach

ControlAI has commissioned multiple YouGov polls demonstrating strong UK public support for AI safety measures:

**January 2025 YouGov Poll (UK)**:
- 73% favor halting rapid superintelligence development
- 74% support empowering the UK AI Safety Institute (AISI) as a regulator
- 87% support safety regime for AI development
- 76% favor monitoring large compute clusters
- 82% support mandatory AISI testing and company accountability[^30]

**Additional Polling and Reach**:
- 79% support for a UK AI regulator
- 87% want developers to prove safety before release
- **150+ million views** on AI risk content
- **150,000+ messages** sent to lawmakers via their tools[^31]

### Media Presence

ControlAI has achieved significant media coverage, with mentions in:

- The Spectator (January 29, 2025) on DeepSeek stakes
- Newsweek (January 31, 2025) on AI extinction race
- Financial Times (September 12, 2024) on OpenAI bioweapon risks
- The Guardian (December 8, 2025) on parliamentarians calling for regulation
- City A.M. (December 8, 2025) on MPs pushing for stricter AI rules
- The Guardian (January 28, 2025) on former OpenAI researcher warnings
- New York Times (March 14, 2024) on powerful AI preparedness
- The Times (December 6, 2024) on <EntityLink id="E274">scheming</EntityLink> ChatGPT[^32]

### Special Projects

ControlAI has launched several targeted projects:

- **"Artificial Guarantees"** (January 2025): Documenting inconsistencies by AI companies, highlighting shifting statements on risks and broken promises[^33]
- **"What leaders say about AI"** (September 2024): Compilation of warnings from AI leaders and researchers[^34]
- **Rational Animations collaboration**: Video "What if AI just keeps getting smarter?" garnered 1.4 million views, warning of superintelligent, self-improving AI leading to extinction via indifference[^35]

## Criticisms and Controversies

### Critique of Coefficient Giving's Approach

ControlAI has positioned itself in opposition to <EntityLink id="E552">Coefficient Giving</EntityLink>'s approach to AI safety, arguing that the funder's strategy is "undemocratic" and centralizes control in a small group of "trusted" actors.[^36] The organization's "Direct Institutional Plan" dedicates over 500 words to criticizing the funder (then operating as <EntityLink id="E521">Open Philanthropy</EntityLink>) as the main funder in AI safety, highlighting:

- Over \$80 million provided to establish the <EntityLink id="E524">Center for Security and Emerging Technology</EntityLink> (CSET), which placed fellows in the US Department of Commerce and White House
- Funding for the Horizon Institute supporting placements in US congressional offices and executive agencies
- Grants to OpenAI in exchange for a board seat for <EntityLink id="E156">Holden Karnofsky</EntityLink>
- Acting as "sole arbiter" of trustworthiness in AGI control strategy[^37]

ControlAI argues that Coefficient Giving's approach of building influence through strategic placements and supporting "responsible actors" in building superintelligence (a view associated with figures like Holden Karnofsky and Will MacAskill) is fundamentally flawed compared with ControlAI's own civic engagement model, which emphasizes democratic processes.[^38]

### Industry Relationship Critiques

The organization has been vocal in criticizing frontier AI companies for what it characterizes as systematically undermining alignment research and regulation to race toward AGI. ControlAI argues that companies are driven by "utopian beliefs" in AGI ushering in an ideal world rather than prioritizing safety.[^39]

Specific criticisms include:

- Insufficient investment in alignment: only \$200 million and a handful of researchers working on problems that, by ControlAI's assessment, require decades of research and trillions of dollars
- Companies collaborating to unlock resources like chips and power while ignoring governance
- Shifting baseline tactics and broken promises documented in their "Artificial Guarantees" project
- Racing to ASI despite warnings, downplaying risks even while acknowledging issues like bioweapon misuse[^40]

### Skepticism About Plan Feasibility

Within the effective altruism and AI safety communities, ControlAI's approach has received a mixed reception:

**Positive Views**:
- Described as the "most x-risk-focused" 501(c)(4) organization
- Praised for concrete campaigns with tangible results (31 public commitments from MPs/Lords)
- Collaboration with Rational Animations characterized as "really great"[^41]

**Criticisms**:
- Donors and community members express skepticism that global "<EntityLink id="E553">pause AI</EntityLink>" regulations are feasible due to coordination challenges
- Concerns that detection without enforcement is insufficient—companies could ignore reports
- Debates over impact: videos are effective for raising awareness but less successful at converting views into actions such as emailing or calling lawmakers
- Tension with EA leadership favoring cautious superintelligence development over outright bans[^42]

### Sensationalism Concerns

Some critics have characterized ControlAI as a group that "dramatically warns of AI's purported extinction risk," potentially sensationalizing risks.[^43] However, CEO Andrea Miotti has responded that critics often nitpick experimental setups but should focus on trends in AI behaviors like self-preservation, resistance to shutdown, and strategic deception.[^44]

## Relationship to AI Safety and Alignment

ControlAI operates primarily in the AI policy and advocacy space rather than technical alignment research. The organization's approach is grounded in the assessment that alignment is fundamentally intractable with current resources:

- Solving alignment would require decades of research and trillions in investment to address issues like identifying human values, reconciling contradictions, and predicting side-effects
- Currently only \$200 million is invested, mostly in patching issues rather than solving core problems
- Progress in AI capabilities is limited by resources rather than insight, so scaling produces opaque, rapidly advancing systems whose new skills and internal workings experts fail to predict[^45]

The organization emphasizes that <EntityLink id="E6">AI control</EntityLink> is not just a technical problem but requires institutional rules and democratic governance.[^46] This positions ControlAI distinctly from technical alignment organizations like <EntityLink id="E22">Anthropic</EntityLink>, <EntityLink id="E218">OpenAI</EntityLink>'s alignment teams, or <EntityLink id="E557">Redwood Research</EntityLink>, which focus on developing technical solutions for controlling AI systems.

ControlAI's warnings align with broader concerns in the AI safety community about fundamental challenges in controlling superintelligent systems:

- Self-modifying code and learning from patterns that cannot be anticipated may make control inherently insoluble
- Increasing AI capability reduces controllability; self-improving AI may resist goal changes and pursue instrumental goals like resource acquisition
- Verification is extremely difficult due to AI's software nature, enabling hiding of modifications[^47]

## Community Reception

### Effective Altruism and Rationalist Communities

Discussions on the EA Forum and <EntityLink id="E538">LessWrong</EntityLink> portray ControlAI as strongly x-risk-focused but reveal debates about the feasibility of their approach:

**Support**:
- Evolution from think tank to concrete grassroots campaigns praised
- High-quality content production acknowledged
- Donor support for their regulation efforts despite skepticism about global enforcement[^48]

**Skepticism**:
- Questions about whether moderate regulations or alignment research will succeed
- Concerns that weaker systems cannot oversee stronger ones, with no known methods for overseeing superintelligent systems
- Broader EA critiques that proper oversight of superintelligence may itself require superhuman capability, which subhuman systems cannot provide
- Unresolved issues around stability under reflection and steering stronger systems[^49]

The organization's positioning against prominent EA figures' views on "responsible actors" building superintelligence has created some tension with EA leadership.[^50]

## Key Uncertainties

Several important questions remain about ControlAI's approach and impact:

1. **Scalability of Success**: Can the organization replicate its UK success in other countries, particularly the US where regulatory dynamics differ significantly?

2. **Enforcement Mechanisms**: How would proposed bans on superintelligence development be enforced internationally, given coordination challenges and incentive structures?

3. **Technical Feasibility of Proposals**: Are the organization's proposed capability thresholds and monitoring systems technically viable, and can they keep pace with rapid AI progress?

4. **Relationship to Technical Safety Work**: How does ControlAI's advocacy-first approach complement or conflict with technical alignment research efforts?

5. **Long-term Funding Sustainability**: With no major disclosed funders and only £1M expected in fundraising, can the organization sustain operations at the scale needed for global impact?

6. **Impact on AI Development**: Will the organization's campaigns lead to meaningful policy changes, or primarily serve to raise awareness without shifting development trajectories?

7. **Alternative Approaches**: Is preventing superintelligence development the optimal strategy, or should resources focus on alignment research, differential development, or other interventions?

## Sources

[^1]: [ControlAI Overview](https://controlai.com)
[^2]: [A Brief Guide to Anti-AI Activist Groups](https://www.transformernews.ai/p/a-brief-guide-to-anti-ai-activist-stop-ai-pauseai-controlai)
[^3]: [ControlAI Homepage](https://controlai.com)
[^4]: [London Futurists - A Narrow Path to a Good Future with AI](https://www.youtube.com/watch?v=SVV8nP2dCEs)
[^5]: [ControlAI Homepage](https://controlai.com)
[^6]: [ControlAI About Page](https://controlai.com/about)
[^7]: [Transformer News - Anti-AI Activist Groups](https://www.transformernews.ai/p/a-brief-guide-to-anti-ai-activist-stop-ai-pauseai-controlai)
[^8]: [London Futurists Interview](https://www.youtube.com/watch?v=SVV8nP2dCEs)
[^9]: [ControlAI About Page](https://controlai.com/about)
[^10]: [ControlAI Past Campaigns](https://controlai.com/past-campaigns)
[^11]: [ControlAI Past Campaigns](https://controlai.com/past-campaigns)
[^12]: [ControlAI Past Campaigns](https://controlai.com/past-campaigns)
[^13]: [EA Forum - Overview of AI Safety Outreach Grassroots Orgs](https://forum.effectivealtruism.org/posts/uZGnKj56A7w5TZKkc/overview-ai-safety-outreach-grassroots-orgs)
[^14]: [ControlAI DIP](https://controlai.com/dip)
[^15]: [ControlAI Engagement Learnings](https://controlai.com/engagement-learnings)
[^16]: [ControlAI News - Lawmaker Coalition](https://controlai.news)
[^17]: [ControlAI DIP](https://controlai.com/dip)
[^18]: [London Futurists Interview](https://www.youtube.com/watch?v=SVV8nP2dCEs)
[^19]: [RocketReach - Control AI Profile](https://rocketreach.co/control-ai-profile_b7373f48c4311ba7)
[^20]: [London Futurists Interview](https://www.youtube.com/watch?v=SVV8nP2dCEs)
[^21]: [ControlAI DIP](https://controlai.com/dip)
[^22]: [ControlAI DIP](https://controlai.com/dip)
[^23]: [ControlAI Partners Page](https://campaign.controlai.com/partners)
[^24]: [Apart Research - Red Teaming A Narrow Path](https://apartresearch.com/project/red-teaming-a-narrow-path-controlai-policy-sprint-fjwu)
[^25]: [Apart Research - Red Teaming A Narrow Path](https://apartresearch.com/project/red-teaming-a-narrow-path-controlai-policy-sprint-fjwu)
[^26]: [ControlAI DIP](https://controlai.com/dip)
[^27]: [ControlAI Homepage](https://controlai.com)
[^28]: [ControlAI Homepage](https://controlai.com)
[^29]: [EA Forum - Overview of AI Safety Outreach Grassroots Orgs](https://forum.effectivealtruism.org/posts/uZGnKj56A7w5TZKkc/overview-ai-safety-outreach-grassroots-orgs)
[^30]: [ControlAI Polls](https://controlai.com/polls)
[^31]: [ControlAI Homepage](https://controlai.com)
[^32]: [ControlAI Media Coverage](https://controlai.com/media)
[^33]: [ControlAI Projects](https://controlai.com/projects)
[^34]: [ControlAI Projects](https://controlai.com/projects)
[^35]: [EA Forum - RA x ControlAI Video](https://forum.effectivealtruism.org/posts/ujzPe8TXsHm93aiGd/ra-x-controlai-video-what-if-ai-just-keeps-getting-smarter)
[^36]: [ControlAI - Designing the DIP](https://controlai.com/designing-the-dip)
[^37]: [ControlAI - Designing the DIP](https://controlai.com/designing-the-dip)
[^38]: [ControlAI - Designing the DIP](https://controlai.com/designing-the-dip)
[^39]: [ControlAI Risks Page](https://controlai.com/risks)
[^40]: [ControlAI Engagement Learnings](https://controlai.com/engagement-learnings)
[^41]: [EA Forum - Overview of AI Safety Outreach Grassroots Orgs](https://forum.effectivealtruism.org/posts/uZGnKj56A7w5TZKkc/overview-ai-safety-outreach-grassroots-orgs)
[^42]: [ControlAI - Designing the DIP](https://controlai.com/designing-the-dip)
[^43]: [Futurism - AI Models Survival Drive](https://futurism.com/artificial-intelligence/ai-models-survival-drive)
[^44]: [Futurism - AI Models Survival Drive](https://futurism.com/artificial-intelligence/ai-models-survival-drive)
[^45]: [ControlAI Risks Page](https://controlai.com/risks)
[^46]: [ControlAI News - Avoiding Extinction with Andrea Miotti](https://controlai.news/p/avoiding-extinction-with-andrea-miotti)
[^47]: [Claire Berlinski Substack - Is the AI Control Problem Insoluble?](https://claireberlinski.substack.com/p/is-the-ai-control-problem-insoluble)
[^48]: [LessWrong - Where I Am Donating in 2025](https://www.lesswrong.com/posts/gW3XMJGgrSJ5hKyjD/where-i-am-donating-in-2025)
[^49]: [EA Forum - Some Quick Thoughts on AI Is Easy to Control](https://forum.effectivealtruism.org/posts/p4hGrvJrwqEGuDTaC/some-quick-thoughts-on-ai-is-easy-to-control-1)
[^50]: [ControlAI - Designing the DIP](https://controlai.com/designing-the-dip)