What's changed since the "pause AI" letter six months ago?
Credibility Rating
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: MIT Technology Review
A reflective interview piece useful for understanding the political and organizational dynamics around AI governance advocacy in mid-2023, particularly the limits of open letters and the case for regulatory enforcement mechanisms.
Summary
MIT Technology Review interviews Max Tegmark six months after the Future of Life Institute's open letter calling for a pause on advanced AI development. While the letter succeeded in shifting the Overton window and normalizing public discussion of existential AI risk, no meaningful U.S. regulation resulted and all major AI companies continued development at full speed. Tegmark argues that only government intervention via FDA-style oversight can create the conditions for an enforceable pause, since no single company can pause unilaterally without competitive disadvantage.
Key Points
- The pause letter's primary success was mainstreaming existential AI risk discourse, making it safer for researchers and executives to voice concerns publicly.
- No meaningful U.S. AI regulation was passed in the six months following the letter; all major AI companies continued development unimpeded.
- Tegmark argues a government-enforced pause is the only viable path, as individual companies cannot pause unilaterally without being outcompeted.
- Elon Musk's founding of xAI after signing the pause letter is framed as a competitive response to the absence of enforceable regulation.
- Tegmark warns against three key mistakes: letting tech companies write legislation, framing AI as a US-China race, and siloing current harms vs. existential risk concerns.
Cited by 3 pages
| Page | Type | Quality |
|---|---|---|
| Elon Musk: Track Record | -- | 66.0 |
| Pause / Moratorium | Concept | 72.0 |
| Pause Advocacy | Approach | 91.0 |
Cached Content Preview
Six months on from the “pause” letter | MIT Technology Review
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
Last Friday marked six months since the Future of Life Institute (FLI), a nonprofit focusing on existential risks surrounding artificial intelligence, shared an open letter signed by famous people such as Elon Musk, Steve Wozniak, and Yoshua Bengio. The letter called for tech companies to “pause” the development of AI language models more powerful than OpenAI’s GPT-4 for six months.
Well, that didn’t happen, obviously.
I sat down with MIT professor Max Tegmark, the founder and president of FLI, to take stock of what has happened since. Here are highlights of our conversation.
On shifting the Overton window on AI risk: Tegmark told me that in conversations with AI researchers and tech CEOs, it had become clear that there was a huge amount of anxiety about the existential risk AI poses, but nobody felt they could speak about it openly “for fear of being ridiculed as Luddite scaremongerers.” “The key goal of the letter was to mainstream the conversation, to move the Overton window so that people felt safe expressing these concerns,” he says. “Six months later, it’s clear that part was a success.”
But that’s about it: “What’s not great is that all the companies are still going full steam ahead and we still have no meaningful regulation in America. It looks like US policymakers, for all their talk, aren’t going to pass any laws this year that meaningfully rein in the most dangerous stuff.”
Why the government should step in: Tegmark is lobbying for an FDA-style agency that would enforce rules around AI, and for the government to force tech companies to pause AI development. “It’s also clear that [AI leaders like Sam Altman, Demis Hassabis, and Dario Amodei] are very concerned themselves. But they all know they can’t pause alone,” Tegmark says. Pausing alone would be “a disaster for their company, right?” he adds. “They just get outcompeted, and then that CEO will be replaced with someone who doesn’t want to pause. The only way the pause comes about is if the governments of the world step in and put in place safety standards that force everyone to pause.”
So how about Elon … ? Musk signed the letter calling for a pause, only to set up a new AI company called X.AI to build AI systems that would “understand the true nature of the universe.” (Musk is an advisor to the FLI.) “Obviously, he wants a pause just like a lot of other AI leaders. But as long as there isn’t one, he feels he has to also stay in the game.”
Why he thinks tech CEOs have the goodness of humanity in their hearts: “What makes me think that they really want a good future with AI, not a bad one? I’ve known them for many years. I talk with
... (truncated, 10 KB total)