ControlAI Engagement Learnings
Credibility Rating
3/5
Good (3): Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: Control AI
ControlAI is an advocacy organization focused on AI risk; this page documents their engagement and outreach learnings, useful for those interested in AI safety advocacy strategy and governance communication.
Metadata
Importance: 30/100 · blog post · analysis
Summary
A resource from ControlAI documenting lessons learned from their advocacy and engagement efforts around AI safety and governance. It likely covers insights from outreach, policy engagement, and public communications campaigns aimed at advancing AI risk awareness and regulation.
Key Points
- Documents practical lessons from ControlAI's advocacy and stakeholder engagement activities
- Likely covers outreach strategies for communicating AI risk to policymakers and the public
- Reflects on what works and what doesn't in AI safety advocacy campaigns
- May include insights on coalition-building and influencing AI governance discussions
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| ControlAI | Organization | 63.0 |
Cached Content Preview
HTTP 200 · Fetched Mar 20, 2026 · 31 KB
# Key learnings from our engagement with lawmakers
_By Leticia Garcia_
Between November 2024 and May 2025, ControlAI met with 84 cross-party UK parliamentarians. Roughly 4 in 10 were MPs, 3 in 10 were Lords, and 2 in 10 represented devolved legislatures: the Welsh Senedd, Scottish Parliament, and Northern Ireland Assembly. We briefed these parliamentarians about the [risk of extinction from AI](https://safe.ai/work/statement-on-ai-risk) that arises from loss of control of advanced AI systems. 1 in 3 lawmakers that we met during this period supported our [campaign](https://controlai.com/statement).
Until recently, civic engagement on AI risk has been largely overlooked. Yet it is now more critical than ever. Despite warnings from Nobel laureates, AI scientists, and CEOs of leading AI companies that artificial intelligence poses an “extinction” threat to humanity, no legislation currently protects the British public. While these risks originate from a handful of companies with the resources to develop advanced AI systems, their impact threatens all of society. The public plays a vital role in determining acceptable levels of risk and demanding appropriate safeguards from lawmakers.
To solve this problem, parliamentarians must first be aware of it. Like much of society, they are still getting up to speed with AI and will remain unaware of the risks unless someone informs them. At ControlAI, we are working to build this common knowledge. You can help too!
To help you get started, we are sharing key learnings from our engagement with parliamentarians during our Superintelligence campaign. This document covers: (i) how parliamentarians typically receive our AI risk briefings; (ii) practical outreach tips; (iii) effective leverage points for discussing AI risks; (iv) recommendations for crafting a compelling pitch; (v) common challenges we've encountered; (vi) key considerations for successful meetings; and (vii) recommended books and media articles that I’ve found helpful.
# (i) Overall reception of our briefings
**Very few parliamentarians are up to date on AI and AI risk:** Around 80–85% of parliamentarians were only somewhat familiar with AI, with their engagement largely limited to occasional use of large language models (LLMs) like ChatGPT for basic tasks (e.g., getting assistance with writing a speech). Their staff were slightly more familiar with AI, but few were well-versed in the broader conversation surrounding it.
**Capacity is the main limiting factor:** MPs typically have 3–5 staffers, many of whom focus primarily on constituency work. Members of devolved legislatures usually have 2–4 staffers, while Peers often have even less support – some have no dedicated staff at all.
As a result, there is rarely anyone on these teams who can dedicate significant time to researching AI. Except for a few staffers with a personal interest in AI, most staffers we spoke to had little or no familiarity with it. While most of those we spoke to expre
... (truncated, 31 KB total)
Resource ID: 9adf0f12d4e2f1aa | Stable ID: NjVmYzc1ZD