Longterm Wiki

Pause Giant AI Experiments: An Open Letter (Wikipedia)

reference

Credibility Rating

Good (3/5)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Wikipedia

Landmark public advocacy moment in AI safety history; useful reference for understanding the 2023 AI governance debate and the gap between safety calls and industry practice.

Metadata

Importance: 62/100 · wiki page · reference

Summary

Wikipedia article covering the March 2023 Future of Life Institute open letter calling for a 6-month pause on training AI systems more powerful than GPT-4, signed by over 30,000 people including prominent researchers and executives. The letter cited risks including AI propaganda, job automation, human obsolescence, and loss of societal control, and called for increased safety research and government regulation. Despite widespread attention, no pause materialized and AI development accelerated.

Key Points

  • Published by the Future of Life Institute in March 2023, one week after GPT-4's release, calling for a 6-month pause on training AI systems more powerful than GPT-4.
  • Received 30,000+ signatures from figures including Yoshua Bengio, Stuart Russell, Elon Musk, Steve Wozniak, and Yuval Noah Harari.
  • Cited risks such as AI-generated propaganda, extreme job automation, human obsolescence, and societal loss of control in a race-to-the-bottom dynamic.
  • Called for government regulation, independent audits, tracking of powerful AI systems, and robust public funding for AI safety research.
  • Despite generating renewed governmental urgency around AI governance, AI companies continued accelerating development with vast infrastructure investments.

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| Future of Life Institute | Organization | 46.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 22 KB
# Pause Giant AI Experiments: An Open Letter


2023 letter calling for a pause on AI system training

**Pause Giant AI Experiments: An Open Letter** is the title of a letter published by the [Future of Life Institute](https://en.wikipedia.org/wiki/Future_of_Life_Institute "Future of Life Institute") in March 2023. The letter calls on "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than [GPT-4](https://en.wikipedia.org/wiki/GPT-4 "GPT-4")", citing risks such as AI-generated propaganda, extreme automation of jobs, human obsolescence, and a society-wide loss of control.[\[1\]](https://en.wikipedia.org/wiki/Pause_Giant_AI_Experiments:_An_Open_Letter#cite_note-:0-1) It received more than 30,000 signatures, including academic AI researchers and industry CEOs such as [Yoshua Bengio](https://en.wikipedia.org/wiki/Yoshua_Bengio "Yoshua Bengio"), [Stuart Russell](https://en.wikipedia.org/wiki/Stuart_J._Russell "Stuart J. Russell"), [Elon Musk](https://en.wikipedia.org/wiki/Elon_Musk "Elon Musk"), [Steve Wozniak](https://en.wikipedia.org/wiki/Steve_Wozniak "Steve Wozniak") and [Yuval Noah Harari](https://en.wikipedia.org/wiki/Yuval_Noah_Harari "Yuval Noah Harari").[\[1\]](https://en.wikipedia.org/wiki/Pause_Giant_AI_Experiments:_An_Open_Letter#cite_note-:0-1)[\[2\]](https://en.wikipedia.org/wiki/Pause_Giant_AI_Experiments:_An_Open_Letter#cite_note-2)[\[3\]](https://en.wikipedia.org/wiki/Pause_Giant_AI_Experiments:_An_Open_Letter#cite_note-3)

## Motivations

The letter was published a week after the release of [OpenAI](https://en.wikipedia.org/wiki/OpenAI "OpenAI")'s [large language model](https://en.wikipedia.org/wiki/Large_language_model "Large language model") [GPT-4](https://en.wikipedia.org/wiki/GPT-4 "GPT-4"). It asserts that current large language models are "becoming human-competitive at general tasks", referencing a paper on early experiments with GPT-4, described as showing "Sparks of [AGI](https://en.wikipedia.org/wiki/Artificial_general_intelligence "Artificial general intelligence")".[\[4\]](https://en.wikipedia.org/wiki/Pause_Giant_AI_Experiments:_An_Open_Letter#cite_note-4) AGI is described as posing numerous important risks, especially in a context of [race-to-the-bottom](https://en.wikipedia.org/wiki/Race_to_the_bottom "Race to the bottom") dynamics, in which some AI labs may be incentivized to overlook security in order to deploy products more quickly.[\[5\]](https://en.wikipedia.org/wiki/Pause_Giant_AI_Experiments:_An_Open_Letter#cite_note-5)

It calls for refocusing AI research on making powerful AI systems "more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal". The letter also recommends more governmental regulation and independent audits before training AI systems, as well as "tracking highly capable AI systems and large pools of computational capability" and "robust public funding for technical AI safety research".[\[1\]](https:/

... (truncated, 22 KB total)
Resource ID: 4fc41c1e8720f41f | Stable ID: NDQ0ZDEwMj