Longterm Wiki

ControlAI Partners Page

web

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Control AI

ControlAI is an advocacy organization campaigning for strong international controls on advanced AI; this partners page shows which organizations are affiliated with their efforts, useful for mapping the AI safety policy coalition landscape.

Metadata

Importance: 25/100 · homepage · reference

Summary

This page lists organizational partners supporting the ControlAI campaign, which advocates for international oversight and control of advanced AI development. It serves as a directory of aligned organizations and groups backing efforts to establish governance frameworks for transformative AI systems.

Key Points

  • Lists partner organizations supporting the ControlAI advocacy campaign for AI governance
  • Reflects a coalition-building approach to pushing for international AI oversight and safety measures
  • Indicates the breadth of civil society, research, and advocacy groups aligned with AI risk concerns
  • Part of a broader campaign effort to influence AI policy at national and international levels

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| ControlAI | Organization | 63.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 2 KB
# For Strategic Partners

You may be able to do more than a concerned citizen. If you are an influential individual or organization and would like our advice on high-leverage approaches, contact us.

We have extensive experience briefing lawmakers and government officials, as well as industry and civil society leaders.

Below are examples of policies and interventions that we believe are crucial to addressing extinction risk from AI. They are incremental steps toward a world safe from superintelligence.

[Contact Us](mailto:partners@controlai.com)

## Policies and Interventions

### Public Acknowledgement of Risk

For change to happen, AI extinction risk must be in the public spotlight. Officials and authority figures can make this happen by publicly acknowledging the extinction risk from AI, following what experts have done with the Statement on AI Risk.

Implementation: Official government statements, parliamentary declarations, inclusion in national security assessments, and major media cycles.

### Extinction Risk Preparedness

Establish formal government doctrine treating AI extinction risk as a national security priority, similar to pandemic or nuclear threat preparedness.

Components: Risk assessment frameworks, response protocols, and institutional responsibilities for monitoring and mitigation.

### Halting the Development of Superintelligence

Establish a legal prohibition on developing artificial superintelligence systems until adequate security measures and international agreements are in place. This can be done legislatively or via executive decision.

Scope: Research and development activities aimed at creating systems that exceed human cognitive abilities across all domains.

### Mandate Kill Switches

Mandatory emergency shutdown capabilities for advanced AI systems, allowing immediate termination if dangerous behavior is detected.

Requirements: Tamper-proof design, government access, fire drills, and automatic triggers for specified risk scenarios.

### Restrict and Monitor Superintelligence Precursors

Restricting and monitoring the development and deployment of AI capabilities that are direct precursors to superintelligence, even if not superintelligent themselves.

Examples: AIs autonomously developing other AIs, AIs that can improve themselves, AIs that can escape from security containment, large concentrations of computing power.