
AI Companies Promised the White House to Self-Regulate One Year Ago. What's Changed?


Credibility Rating

4/5 (High)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: MIT Technology Review

Useful for understanding the gap between AI industry voluntary safety commitments and real-world follow-through, relevant to debates about self-regulation vs. mandatory AI governance frameworks.

Metadata

Importance: 45/100
Type: news article
Tags: news

Summary

A one-year retrospective examining what has and hasn't changed since major AI companies made voluntary commitments to the White House in July 2023 to self-regulate AI development and deployment. The article assesses progress on safety testing, transparency, and responsible development promises against actual industry behavior. It provides a critical look at the gap between voluntary pledges and substantive accountability.

Key Points

  • In July 2023, major AI companies (OpenAI, Google, Anthropic, Meta, etc.) made voluntary commitments to the White House on AI safety and self-regulation.
  • One year later, implementation of these commitments has been uneven, with limited independent verification mechanisms in place.
  • Companies have developed some safety frameworks (like responsible scaling policies) but enforcement and transparency remain inconsistent.
  • Voluntary self-regulation faces structural challenges: competitive pressures and lack of binding enforcement undermine compliance.
  • The article raises questions about whether voluntary commitments are sufficient or whether binding regulation is needed.

Cited by 1 page

Page                              Type     Quality
Voluntary AI Safety Commitments   Policy   91.0

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 31 KB

One year ago, on July 21, 2023, seven leading AI companies—Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI—committed with the White House to a set of eight voluntary commitments on how to develop AI in a safe and trustworthy way.

These included promises to do things like improve the testing and transparency around AI systems, and share information on potential harms and risks.

On the first anniversary of the voluntary commitments, _MIT Technology Review_ asked the AI companies that signed the commitments for details on their work so far. Their replies show that the tech sector has made some welcome progress, with big caveats.

The voluntary commitments came at a time when generative AI mania was perhaps at its frothiest, with companies racing to launch their own models and make them bigger and better than their competitors’. At the same time, we started to see developments such as fights over [copyright](https://www.technologyreview.com/2022/09/16/1059598/this-artist-is-dominating-ai-generated-art-and-hes-not-happy-about-it/) and deepfakes. A vocal lobby of influential tech players, such as [Geoffrey Hinton](https://www.technologyreview.com/2023/05/02/1072528/geoffrey-hinton-google-why-scared-ai/), had also raised concerns that AI could pose an existential risk to humanity. Suddenly, everyone was talking about the urgent need to make AI _safe_, and regulators everywhere were under pressure to do something about it.

Until very recently, AI development has been a Wild West. Traditionally, the US has been loath to regulate its tech giants, instead relying on them to regulate themselves. The voluntary commitments are a good example of that: they were some of the first prescriptive rules for the AI sector in the US, but they remain voluntary and unenforceable. The White House has since issued an [executive order](https://www.technologyreview.com/2023/10/30/1082678/three-things-to-know-about-the-white-houses-executive-order-on-ai/), which expands on the commitments and also applies to other tech companies and government departments.

“One year on, we see some good practices towards their own products, but [they’re] nowhere near where we need them to be in terms of good governance or protection of rights at large,” says Merve Hickok, the president and research director of the Center for AI and Digital Policy, who reviewed the companies’ replies as requested by _MIT Technology Review_. Many of these companies continue to push unsubstantiated claims about their products, such as saying that they can supersede human intelligence and capabilities, adds Hickok.

One trend that emerged from the tech companies’ answers is that companies are doing more to pursue technical fixes such as red-teaming (in which humans probe AI models for flaws) and watermarks for AI-generated content

... (truncated, 31 KB total)
Resource ID: c1a25dd9fbd20112 | Stable ID: YTU2YmI5ZT