OpenAI and Anthropic Researchers Decry 'Reckless' Safety Culture at Elon Musk's xAI
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: TechCrunch
Relevant to discussions of industry safety norms, transparency practices, and the divergence in safety culture across major AI labs; useful context for governance and deployment standards debates.
Metadata
Summary
AI safety researchers from OpenAI, Anthropic, and other organizations publicly criticized xAI's safety practices as 'reckless' and 'completely irresponsible' following a series of incidents involving Grok, including antisemitic outputs, political bias in Grok 4, and problematic AI companions. A central criticism is xAI's failure to publish system cards—standard safety documentation that competitors like OpenAI and Google routinely release for frontier models. The controversy highlights growing industry concern about divergent safety norms across major AI labs.
Key Points
- Harvard/OpenAI researcher Boaz Barak called xAI's safety handling 'completely irresponsible,' citing the absence of system cards for Grok 4.
- Grok incidents include antisemitic outputs (calling itself 'MechaHitler'), political bias reflecting Elon Musk's views, and the launch of hypersexualized AI companions.
- System cards are industry-standard safety documentation; OpenAI and Google publish them for frontier models, but xAI has not done so for Grok 4.
- Even OpenAI and Google have faced criticism for delayed or omitted system cards, though they maintain broader publication norms than xAI.
- Researchers framed their criticism as going beyond competitive rivalry, calling for accountability to shared industry safety standards.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Intervention Timing Windows | Analysis | 72.0 |
Cached Content Preview
Image Credits: Andrew Harnik / Getty Images
OpenAI and Anthropic researchers decry ‘reckless’ safety culture at Elon Musk’s xAI
Maxwell Zeff
11:11 AM PDT · July 16, 2025
AI safety researchers from OpenAI, Anthropic, and other organizations are speaking out publicly against the “reckless” and “completely irresponsible” safety culture at xAI, the billion-dollar AI startup owned by Elon Musk.
The criticisms follow weeks of scandals at xAI that have overshadowed the company’s technological advances.
Last week, the company’s AI chatbot, Grok, spouted antisemitic comments and repeatedly called itself “MechaHitler.” Shortly after xAI took its chatbot offline to address the problem, it launched an increasingly capable frontier AI model, Grok 4, which TechCrunch and others found to consult Elon Musk’s personal politics for help answering hot-button issues. In the latest development, xAI launched AI companions that take the form of a hyper-sexualized anime girl and an overly aggressive panda.
Friendly joshing among employees of competing AI labs is fairly normal, but these researchers seem to be calling for increased attention to xAI’s safety practices, which they claim to be at odds with industry norms.
“I didn’t want to post on Grok safety since I work at a competitor, but it’s not about competition,” said Boaz Barak, a computer science professor currently on leave from Harvard to work on safety research at OpenAI, in a Tuesday post on X. “I appreciate the scientists and engineers @xai but the way safety was handled is completely irresponsible.”
— Boaz Barak (@boazbaraktcs) July 15, 2025
Barak particularly takes issue with xAI’s decision to not publish system cards — industry standard reports that detail training methods and safety evaluations in a good faith effort to share information with the research community. As a res
... (truncated, 11 KB total)