Longterm Wiki

The Direct Institutional Plan | ControlAI


Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Control AI

ControlAI is an AI safety advocacy organization; the DIP is their flagship strategic plan for achieving international governance of advanced AI, making it a useful reference for understanding civil society approaches to AI policy and coordination.

Metadata

Importance: 55/100 · policy brief · primary source

Summary

The Direct Institutional Plan (DIP) is ControlAI's strategic framework for preventing AI-related catastrophe through direct engagement with governments, international bodies, and key institutions. It outlines a concrete policy and governance roadmap aimed at establishing binding international agreements and oversight mechanisms for advanced AI development. The plan focuses on actionable steps to build the institutional infrastructure needed to ensure AI remains safe and under human control.

Key Points

  • Proposes direct engagement with governments and international institutions to establish enforceable AI safety standards and agreements.
  • Focuses on building international coordination mechanisms to prevent a race-to-the-bottom dynamic among AI-developing nations.
  • Outlines specific institutional targets and policy levers to create binding oversight of frontier AI development.
  • Emphasizes urgency, arguing that proactive institutional action is needed before transformative AI systems are deployed.
  • Represents ControlAI's core strategic document linking AI safety concerns to concrete governance interventions.

Cited by 1 page

Page | Type | Quality
ControlAI | Organization | 63.0

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 6 KB
 The Direct Institutional Plan

 Keeping Humanity in Control

 AI companies are racing to build Artificial Superintelligence (ASI) - systems more intelligent than all of humanity combined. If ASI is created in the next few years, humanity risks losing control over its future. Top AI scientists, world leaders, and even AI company CEOs themselves warn it could lead to human extinction. 

 Given this, we have a clear imperative: prevent the development of artificial superintelligence and keep humanity in control. 

 We have a plan that anyone can follow to help turn the tide: the Direct Institutional Plan (DIP). It is composed of two straightforward steps:

 Design policies that target ASI development and precursor technologies

Then, inform every relevant person in the democratic process – not only lawmakers, but also the executive branch, the civil service, the media, and civil society – and convince them to take a stance on these policies.

This plan is simple, even obvious, and that is the point. It is the most direct way imaginable to tackle the problem, working through the very institutions that power our societies.

 That no one has genuinely tried it is a massive failure of civic engagement that should be rectified as soon as possible.

 The DIP offers a clear path to solving the problem of superintelligence, one that follows the way civilizational problems are best solved: awareness, civic engagement, and applying to AI the standards of high-risk industries.

 We have secured public support for our campaigns from high-ranking politicians, have authored draft bills for the UK and US, have created multiple viral videos, and have led international coalitions.

 Laying the Groundwork

 We have spent the last few months laying the groundwork for everyone to be able to participate in the DIP.

First, we wrote A Narrow Path, developing concrete policy measures to tackle superintelligence and keep humanity durably in control. We also developed country-specific policy briefs on the policies that can be implemented at the national level immediately. These include:

 Banning the deliberate development of superintelligence

 Prohibiting dangerous AI capabilities and superintelligence precursors such as automated AI research and hacking

 Requiring companies to demonstrate that an AI will not use forbidden capabilities before they run it

 A licensing system for advanced AI development

 Every single country committing to take action makes it more likely ASI development is restricted globally.

Then, we launched a pilot campaign focused on UK lawmakers that validated our approach. In less than three months, over 20 cross-party UK parliamentarians publicly supported our campaign. This amounts to more than one in three lawmakers we briefed recognizing extinction risks from AI and supporting binding regulation for the m

... (truncated, 6 KB total)
Resource ID: 1131a633492265eb | Stable ID: ODM3OWNmND