Longterm Wiki

Senator Wiener Responds to OpenAI Opposition to SB 1047

government

This August 2024 press release is a key document in the SB 1047 legislative debate, reflecting the tension between state-level AI safety regulation and industry opposition, particularly OpenAI's preference for federal over state governance.

Metadata

Importance: 55/100 · press release · primary source

Summary

California Senator Scott Wiener rebuts OpenAI's opposition to SB 1047, noting that OpenAI did not criticize any specific provision of the bill and instead argued for federal action. Wiener defends the bill's requirements for safety evaluations and shutdown capabilities on large AI models, dismissing concerns about companies leaving California and citing national security endorsements from senior defense officials.

Key Points

  • OpenAI's opposition letter did not criticize any specific provision of SB 1047, implicitly acknowledging the reasonableness of its core safety evaluation and shutdown requirements.
  • Wiener rejects the argument that AI regulation should be left to Congress, citing California's successful data privacy law as precedent for state-level action.
  • SB 1047 applies to all companies doing business in California, not just those headquartered there, making the 'companies will leave' argument largely moot.
  • The bill received endorsements from national security figures including Lt. Gen. Jack Shanahan (former DoD JAIC director) and former Asst. Secretary of Defense Andrew Weber.
  • Wiener characterizes SB 1047 as asking AI labs to formalize safety evaluations they have already publicly committed to performing.

Cited by 1 page

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 4 KB

Press Release

# Senator Wiener Responds to OpenAI Opposition to SB 1047

August 21, 2024

“What’s notable about the OpenAI letter is that it doesn’t criticize a single provision of the bill. I’m extremely appreciative that OpenAI appears to acknowledge that the bill’s specific core provisions — performing a safety evaluation for foreseeable risks on massively powerful AI models before releasing them and having the ability to shut down a massively powerful model that’s in your possession — are reasonable and implementable. Indeed, OpenAI has repeatedly committed to perform these safety evaluations.

“Instead of criticizing what the bill actually does, OpenAI argues this issue should be left to Congress. As I’ve stated repeatedly, I agree that ideally Congress would handle this. However, Congress has not done so, and we are skeptical Congress will do so. Under OpenAI’s argument about Congress, California never would have passed its data privacy law, and given Congress’s lack of action, Californians would have no protection whatsoever for their data.

“OpenAI also raises national security concerns. Far from undermining national security, SB 1047’s requirements that AI companies thoroughly test their products for the ability to cause catastrophic harm can only strengthen our national security. That is why major national security figures — Lieutenant General John (Jack) Shanahan (U.S. Air Force) and Andrew Weber, former Assistant Secretary of Defense for Nuclear, Chemical and Biological Defense Programs — have endorsed SB 1047. Attached are their public statements in support of the bill.

“Finally, OpenAI claims that companies will leave California if the bill passes. This tired argument — which the tech industry also made when California passed its data privacy law, with that fear never materializing — makes no sense given that SB 1047 is not limited to companies headquartered in California. Rather, the bill applies to companies doing business in California. As a result, locating outside of California does not avoid compliance with the bill.

“Bottom line: SB 1047 is a highly reasonable bill that asks large AI labs to do what they’ve already committed to doing, namely, test their large models for catastrophic safety risk. We’ve worked hard all year, with open source advocates, Anthropic, and others, to refine and improve the bill. SB 1047 is well calibrated to what we know about foreseeable AI risks, and it deserves to be enacted.”

**SB 1047 has received support from major national security experts:**

“I’m optimistic about this balanced legislative proposal that picks up where the White House Executive Order on AI left off,” said **Lieutenant General John (Jack) Shanahan**, United States Air Force, Retired. “SB 1047 addresses at the state level the most dangerous, near-term potential risks to civil society and national security in

... (truncated, 4 KB total)