ControlAI Past Campaigns
Credibility Rating
3/5
Good (3): Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: Control AI
ControlAI is an advocacy organization working on AI risk reduction; this page tracks their past campaigns and may be useful for researchers studying AI safety policy movements or civil society engagement with AI governance.
Metadata
Importance: 30/100 · homepage · reference
Summary
This page documents the historical advocacy campaigns conducted by ControlAI, an organization focused on reducing risks from advanced AI systems. It provides a record of their policy and public awareness initiatives aimed at influencing AI governance and safety measures.
Key Points
- Catalogs past advocacy and policy campaigns organized by ControlAI on AI safety issues
- Demonstrates organized civil society efforts to influence AI governance at institutional levels
- Provides a track record of AI safety activism and public engagement strategies
- Useful reference for understanding how AI safety concerns have been translated into advocacy action
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| ControlAI | Organization | 63.0 |
Cached Content Preview
HTTP 200 | Fetched Mar 20, 2026 | 3 KB
# Our Campaigns
Our current campaign focuses on superintelligence.
There is a simple truth - humanity’s extinction is possible. Recent history has also shown us another truth - we can create artificial intelligence (AI) that can rival humanity.
Under our control, such advanced AI presents one of the greatest opportunities to our collective advancement. With the right approach this technology could be more revolutionary than the creation of the internet and have greater economic impact than the Industrial Revolution. With the wrong approach this technology could be more disruptive and dangerous to life on earth than anything before it, and at its worst, could risk our extinction.
Our latest project is "[The Direct Institutional Plan](https://controlai.com/dip)"; you can read more about it [here](https://controlai.com/dip).
[View Current Campaign](https://controlai.com/dip)
# Past Campaigns
MAR 2025 - PRESENT
**The Direct Institutional Plan**
AI companies are racing to build Artificial Superintelligence (ASI) - systems more intelligent than all of humanity combined. If ASI is created in the next few years, humanity risks losing control over its future. Top AI scientists, world leaders, and even AI company CEOs themselves warn it could lead to human extinction.
[Read More](https://controlai.com/dip)

DEC 2023 - JUN 2024
**Campaign against deepfakes**
Deepfakes are a growing threat to society, and governments must act.
[Read More](https://controlai.com/past-campaigns/campaign-against-deepfakes)

NOV - DEC 2023
**Campaign against exemptions for Foundation Models in the EU AI Act**
In December 2023 the European Parliament settled on an EU AI Act that placed special regulations upon foundation models that have been trained with computational resources beyond a certain threshold.
[Read More](https://controlai.com/past-campaigns/campaign-against-foundation-models)

OCT 2023
**Campaign to prevent an international endorsement of further scaling**
At the AI Safety Summit, we successfully campaigned against the Summit formally giving its approval to Responsible Scaling Policies.
[Read More](https://controlai.com/past-campaigns/campaign-against-scaling)
... (truncated, 3 KB total)
Resource ID: 88974417a76881e1 | Stable ID: OGQzNGZmMD