The Pause Letter: One year later - Future of Life Institute
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: Future of Life Institute
Published by the Future of Life Institute, the organization behind the original 2023 Pause Letter; this retrospective is relevant for understanding the AI governance landscape and the gap between stated safety concerns and actual industry/regulatory action.
Metadata
Summary
A one-year retrospective by the Future of Life Institute on their 2023 open letter calling for a six-month pause on AI systems more powerful than GPT-4. The piece assesses regulatory progress made (EU AI Act, Biden Executive Order, Bletchley Declaration) while arguing that AI development remains dangerously ungoverned and that the letter's proposed safety measures are more urgent than ever.
Key Points
- The original Pause Letter garnered 30,000+ signatures and sparked global debate, but AI labs have since accelerated rather than paused development.
- Regulatory milestones include the EU AI Act, Biden's Executive Order on AI safety, and the Bletchley Declaration signed by 28 countries.
- Key risks highlighted include misinformation, economic displacement, bioweapon enablement, and extreme concentration of power.
- Big Tech continues to lobby against regulation while spending billions on more powerful models, treating safety as an afterthought despite acknowledging risks.
- The article argues that shared safety protocols and meaningful oversight for advanced AI systems remain critically absent one year later.
Cached Content Preview
# The Pause Letter: One year later
It has been one year since our 'Pause AI' open letter sparked a global debate on whether we should temporarily halt giant AI experiments.
Published: March 22, 2024
Author: Anthony Aguirre

One year ago today, the Future of Life Institute put out an [open letter](https://futureoflife.org/open-letter/pause-giant-ai-experiments/) that called for a pause of at least six months on “giant AI experiments” – systems more powerful than GPT-4. It was signed by more than 30,000 individuals, including pre-eminent AI experts and industry executives, and made headlines around the world. The letter reflected the widespread and rapidly growing concern about the massive risks posed by the out-of-control and unregulated race to develop and deploy increasingly powerful systems.
These risks include an explosion in misinformation and digital impersonation, widespread automation condemning millions to economic disempowerment, enablement of terrorists to build biological and chemical weapons, extreme concentration of power into the hands of a few unelected individuals, and many more. These risks have subsequently been acknowledged by the AI corporations’ leaders themselves in [newspaper interviews](https://time.com/6246119/demis-hassabis-deepmind-interview/), [industry conferences](https://fortune.com/2023/06/08/sam-altman-openai-chatgpt-worries-15-quotes/), [joint statements](https://www.safe.ai/work/statement-on-ai-risk), and [U.S. Senate hearings.](https://www.bloomberg.com/news/articles/2023-07-25/anthropic-s-amodei-warns-us-senators-of-ai-powered-bioweapons)
Despite admitting the danger, these AI corporations have not paused. If anything, they have sped up, with vast investments in infrastructure to train ever-more giant AI systems. At the same time, the last 12 months have seen growing global alarm and calls for lawmakers to take action. There has been a flurry of regulatory activity. President Biden signed a sweeping [Executive Order](https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/) directing model developers to share their safety test results with the government and calling for rigorous standards and tools for evaluating systems. The UK held the first global AI Safety Summit, with 28 countries signing the [“Bletchley Declaration”](https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration), committing to cooperate on the safe and responsible development of AI. Perhaps most significantly, the European Parliament passed the world’s first comprehensive legal framework in the space – the [EU AI Act](https://time.com/6903563/eu-ai-act-law-aritificial-intellige
... (truncated, 10 KB total)