SaferAI's 2025 assessment
Credibility Rating
3/5
Good (3): Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: TIME
Published in TIME, this SaferAI report is a third-party comparative assessment of leading AI labs' safety practices, relevant to governance discussions about industry self-regulation and accountability.
Metadata
Importance: 58/100 · news article · analysis
Summary
SaferAI's 2025 evaluation assesses major AI labs (Anthropic, xAI, Meta, OpenAI) on their risk management practices, examining how well they identify, mitigate, and communicate risks from frontier AI systems. The assessment benchmarks labs against safety standards and highlights gaps between stated commitments and actual practices.
Key Points
- Evaluates Anthropic, xAI, Meta, and OpenAI on structured risk management criteria including transparency, red-teaming, and deployment safeguards.
- Highlights competitive pressures that may cause labs to deprioritize safety practices in favor of faster capability deployment.
- Identifies gaps between publicly stated safety commitments and the actual rigor of risk management processes at major labs.
- Provides a comparative framework useful for policymakers and researchers tracking industry safety norms.
- Raises coordination concerns about whether voluntary safety standards are sufficient without external accountability mechanisms.
Cited by 2 pages
| Page | Type | Quality |
|---|---|---|
| AI Lab Safety Culture | Approach | 62.0 |
| Multipolar Trap (AI Development) | Risk | 91.0 |
Cached Content Preview
HTTP 200 · Fetched Mar 20, 2026 · 7 KB
# Top AI Companies Have ‘Unacceptable’ Risk Management, Studies Say

by
[Billy Perrigo](https://time.com/author/billy-perrigo/)
Correspondent
Jul 17, 2025 3:00 AM ET

The logos of Google Gemini, ChatGPT, Microsoft Copilot, Claude by Anthropic, Perplexity, and Bing apps are displayed on the screen of a smartphone. Jaque Silva—NurPhoto/Getty Images

The world’s leading AI companies have “unacceptable” levels of risk management, and a “striking lack of commitment to many areas of safety,” according to two new studies published Thursday.
The risks of even today’s AI—by the admission of many top companies themselves—could include AI helping bad actors carry out cyberattacks or create bioweapons. Future AI models, top scientists worry, could escape human control altogether.
The studies were carried out by the nonprofits SaferAI and the [Future of Life Institute (FLI)](https://futureoflife.org/index-s25). Each was the [second of its kind](https://time.com/7026972/saferai-study-xai-meta/), in what the groups hope will be a running series that incentivizes top AI companies to improve their practices.
“We want to make it really easy for people to see who is not just talking the talk, but who is also walking the walk,” says Max Tegmark, president of the FLI.
**Read More:** _[Some Top AI Labs Have ‘Very Weak’ Risk Management, Study Finds](https://time.com/7026972/saferai-study-xai-meta/)_
SaferAI [assessed](https://ratings.safer-ai.org/) top AI companies’ risk management protocols (also known as responsible scaling policies) to score each company on its approach to identifying and mitigating AI risks.
No AI company scored better than “weak” in SaferAI’s assessment of their risk management maturity. The highest scorer was Anthropic (35%), followed by OpenAI (33%), Meta (22%), and Google DeepMind (20%). Elon Musk’s xAI scored 18%.
Two companies, Anthropic and Google DeepMind, received lower scores
... (truncated, 7 KB total)
Resource ID: a74d9fdd24d82d24 | Stable ID: ODc5MzFjNz