Longterm Wiki

Anthropic backs California's SB 53 AI bill

web

Relevant to AI governance discussions as a concrete example of state-level AI safety legislation and industry endorsement; SB 53 represents a shift from voluntary to mandatory safety commitments for frontier AI developers in California.

Metadata

Importance: 62/100 · news article · news

Summary

Anthropic became the first major tech company to endorse California's SB 53, a bill proposed by Sen. Scott Wiener that would create the first broad legal requirements for large AI model developers in the US. The bill would mandate safety guidelines, transparency about AI risks, stronger whistleblower protections, and emergency reporting systems, largely codifying existing voluntary commitments made by major AI companies.

Key Points

  • SB 53 would require large AI companies in California to publicly share safety-focused guidelines and procedures for mitigating AI risks
  • The bill strengthens whistleblower protections, creating clearer pathways for employees to report potentially catastrophic AI risks
  • The strictest requirements apply only to companies building cutting-edge models with annual revenues exceeding $500 million
  • The bill largely codifies existing voluntary commitments by Anthropic, OpenAI, Google, Meta and others into mandatory legal requirements
  • An emergency reporting system would allow developers and the public to report critical safety incidents related to AI models

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| Anthropic | Organization | 74.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 12 KB


![The Anthropic website on a laptop arranged in New York, US, on Tuesday, Aug. 15, 2023. ](https://media-cldnry.s-nbcnews.com/image/upload/t_fit-560w,f_auto,q_auto:best/rockcms/2024-03/240304-Anthropic-aa-221p-6c6a84.jpg)

The Anthropic website. Gabby Jones / Bloomberg via Getty Images file


Sept. 8, 2025, 6:06 PM EDT

By [Jared Perlo](https://www.nbcnews.com/author/jared-perlo-ncpn1312649)

Artificial intelligence developer Anthropic became the first major tech company Monday to endorse [a California bill](https://www.anthropic.com/news/anthropic-is-endorsing-sb-53) that would regulate the most advanced artificial intelligence models.


Proposed by state Sen. Scott Wiener, [SB 53](https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202520260SB53), if passed, would create the first broad legal requirements for large developers of AI models in the United States.

Among other conditions, the bill would require large AI companies offering services in California to create, publicly share and adhere to safety-focused guidelines and procedures stipulating how each company attempts to mitigate risks from AI. The bill would also strengthen whistleblower protections by creating [stronger pathways](https://www.theguardian.com/technology/article/2024/jul/14/us-financial-watchdog-urged-to-investigate-ndas-at-openai) for employees to flag concerns about severe or [potentially catastrophic risks](https://safe.ai/ai-risk) that might otherwise go unreported.

“With SB 53, developers can compete while ensuring they remain transparent about AI capabilities that pose risks to public safety,” Anthropic said in a statement.

The bill would largely codify [existing](https://bidenwhitehouse.archives.gov/briefing-room/statements-releases/2023/09/12/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-eight-additional-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/) [voluntary commitments made](https://www.gov.uk/government/publications/frontier-ai-safety-commitments-ai-seoul-summit-2024/frontier-ai-safety-commitments-ai-seoul-summit-2024) by the world’s largest AI companies, emphasizing transparency and attention to risks from advanced AI systems. For example, Anthropic, OpenAI, Google, Meta and other companies have already committed to assessing how their products could be used for nefarious purposes and to lay out mitigations to prevent these threats. Recent research has shown that AI models can help [users execute

... (truncated, 12 KB total)
Resource ID: 3aad6bf4e6bae66e | Stable ID: YTZlM2JjYm