Press Release: AI Risk - Center for AI Safety
Credibility Rating
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: Center for AI Safety
This May 2023 statement is a widely cited public declaration that helped legitimize AI existential risk as a serious concern among mainstream researchers and policymakers, often referenced in discussions about the urgency of AI safety work.
Metadata
Summary
A landmark press release from the Center for AI Safety featuring a concise one-sentence statement signed by hundreds of leading AI researchers and public figures, asserting that mitigating the risk of AI-caused extinction should be a global priority alongside other civilizational-scale risks. The statement drew broad signatories including Geoffrey Hinton, Yoshua Bengio, Sam Altman, and Demis Hassabis, lending significant institutional credibility to AI existential risk concerns.
Key Points
- Features a single-sentence consensus statement: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
- Garnered signatures from hundreds of prominent AI researchers, executives, and public intellectuals across academia and industry.
- Represented a watershed moment in mainstream acknowledgment of AI existential risk by top practitioners in the field.
- Published in May 2023, contributing to growing public and policy discourse around frontier AI risks.
- Organized by the Center for AI Safety, a nonprofit focused on reducing societal-scale risks from advanced AI.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI Impacts | Organization | 53.0 |
Cached Content Preview
[CAIS 2024 Impact Report](https://safe.ai/work/impact-report/2024)
### Top AI Scientists Warn: Risk of Extinction from AI on Scale with Nuclear War
San Francisco, CA – Distinguished AI scientists, including Turing Award winners Geoffrey Hinton and Yoshua Bengio, and leaders of the major AI labs, including Sam Altman of OpenAI and Demis Hassabis of Google DeepMind, have signed a [single-sentence statement](https://www.safe.ai/statement-on-ai-risk) from the [Center for AI Safety](https://www.safe.ai/) that reads:
> _“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”_
This represents a historic coalition of AI experts — along with philosophers, ethicists, legal scholars, economists, physicists, political scientists, pandemic scientists, nuclear scientists, and climate scientists — establishing the risk of extinction from advanced, future AI systems as one of the world’s most important problems. The statement affirms growing public sentiment: a recent poll [found that 61 percent of Americans](https://www.reuters.com/technology/ai-threatens-humanitys-future-61-americans-say-reutersipsos-2023-05-17/) believe AI threatens humanity’s future.
The increasing concern about the potential impacts of AI is reminiscent of early discussions about atomic energy. “We knew the world would not be the same,” J. Robert Oppenheimer once recounted. He later called for international coordination to avoid nuclear war. “We need to be having the conversations that nuclear scientists were having before the creation of the atomic bomb,” said Dan Hendrycks, Director of the Center for AI Safety.
It’s crucial that the negative impacts of AI that are already being felt across the world are addressed. We must also have the foresight to anticipate the risks posed by more advanced AI systems. “Pandemics were not on the public’s radar before COVID-19. It’s not too early to put guardrails in place and set up institutions so that AI risks don’t catch us off guard,” Hendrycks said. “As we grapple with immediate AI risks like malicious use, misinformation, and disempowerment, the AI industry and governments around the world need to also seriously confront the risk that future AIs could pose a threat to human existence. Mitigating the risk of extinction from AI will require global action. The world has successfully cooperated to mitigate risks related to nuclear war. The same level of effort is needed to address the dangers posed by future AI systems.”
Signatories were verified through email confirmation or personal contact. The organization that hosts the statement, the Center for AI Safety, is a non-profit with the mission of reducing societal-scale risk from AI through research, field-building, and advocacy. You can learn more about AI safety and the work CAIS is doing at [safe.ai](https://www.safe.ai/) and stay on top of the latest in AI safety news by su
... (truncated, 5 KB total)