Back
ControlAI News - Avoiding Extinction with Andrea Miotti
controlai.news/p/avoiding-extinction-with-andrea-miotti
This interview features Andrea Miotti, Executive Director of ControlAI, an AI safety advocacy organization, alongside Connor Leahy, CEO of Conjecture, discussing extinction-level risks from AI and policy responses; useful for understanding the perspectives of safety-focused researchers active in governance advocacy.
Metadata
Importance: 45/100 · interview · commentary
Summary
An interview with Andrea Miotti, Executive Director of ControlAI, and Connor Leahy, CEO of Conjecture, discussing strategies for avoiding AI-driven extinction risks. The conversation covers AI safety governance, technical alignment challenges, and the policy interventions needed to steer AI development toward safer outcomes.
Key Points
- Andrea Miotti argues that existential risk from advanced AI is a serious near-term concern requiring urgent action
- Connor Leahy discusses Conjecture's approach to AI safety research, spanning both technical alignment and governance strategies
- Emphasizes the need for international coordination and policy frameworks to manage frontier AI development
- Explores the tension between advancing AI capabilities and the lagging readiness of safety measures
- Highlights the role of advocacy organizations like ControlAI in shaping public discourse around AI existential risk
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| ControlAI | Organization | 63.0 |
Cached Content Preview
HTTP 200 · Fetched Mar 20, 2026 · 53 KB
# [ControlAI](https://controlai.news/)
## Avoiding Extinction with Andrea Miotti and Connor Leahy
Episode 1: Extinction and what we can do to prevent it
[ControlAI](https://substack.com/@ctrlai)
May 10, 2025
Welcome to the first edition of the ControlAI Podcast, hosted by [Max Winga](https://open.substack.com/users/231071383-max-winga?utm_source=mentions)!
In this episode we invited [Andrea Miotti](https://open.substack.com/users/80637210-andrea-miotti?utm_source=mentions), Executive Director of ControlAI, and [Connor Leahy](https://open.substack.com/users/103279776-connor-leahy?utm_source=mentions), CEO of Conjecture, to discuss the extinction threat that AI poses to humanity, and how we can avoid it.
Subscribe
If you'd like to continue the conversation, want to suggest future guests, or have ideas about how we might improve, [join our Discord!](https://discord.com/invite/ptPScqtdc5)
If you find the latest developments in AI concerning, and the latest steps towards better AI security exciting, then you should let your elected representatives know!
We have tools that make it super quick and easy to contact your lawmakers. It takes less than a minute to do so: [https://controlai.com/take-action](https://controlai.com/take-action)
* * *
**Transcript:**
**Max Winga**: Hello and welcome to the ControlAI Podcast. I'm your host Max Winga, and joining me today are Andrea Miotti, the Director of ControlAI, and Connor Leahy, CEO of Conjecture and Advisor to ControlAI. We are here today to discuss the risk of extinction from artificial intelligence as AI companies race to build AI systems vastly smarter than humans, along with the necessary solutions to this unprecedented threat.
Connor, you've been quite outspoken in your advocacy on the topic of AI extinction risk, appearing on news programs, podcasts, and debates. Can you explain what's going on with AI and why we should be worried about it?
**Connor Leahy:** Absolutely. So AI is the huge topic. All of us hear about it, it feels like, all day, every day.
And this is a pretty new development, but the concept of AI isn't itself new, or the concept of intelligence. Intelligence, which is like the ability to solve problems. That's kind of how I like to think about it, is intelligence is the ability to solve more and more complex pr
... (truncated, 53 KB total)
Resource ID: e7f057b242586fca | Stable ID: NzhiNjQ5Nz