Longterm Wiki

Credibility Rating

4/5 (High)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: MIT Technology Review

Relevant to AI governance discussions about how frontier AI labs are influencing regulatory processes; useful background for understanding the political economy of AI safety policy in the U.S.

Metadata

Importance: 42/100 · news article

Summary

OpenAI dramatically expanded its lobbying expenditures in 2024, increasing spending nearly sevenfold compared to the prior year. This reflects the company's growing efforts to shape AI-related legislation and regulatory frameworks in Washington, D.C., as policymakers increasingly focus on governing advanced AI systems.

Key Points

  • OpenAI's lobbying spending increased approximately 7x year-over-year, signaling a major strategic shift toward political influence.
  • The surge in lobbying reflects broader industry efforts to shape AI governance, safety regulations, and competition policy.
  • Increased lobbying by AI labs raises questions about whose interests are represented in AI policy formation.
  • The move coincides with growing congressional and executive branch attention to AI regulation and national security implications.
  • Other major AI companies have similarly ramped up lobbying, creating a competitive dynamic in shaping AI policy outcomes.

Cited by 3 pages

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 15 KB
Source: https://www.technologyreview.com/2025/01/21/1110260/openai-ups-its-lobbying-efforts-nearly-seven-fold/

OpenAI spent $1.76 million on government lobbying in 2024 and $510,000 in the last three months of the year alone, according to a new disclosure filed on January 22—a significant jump from 2023, when the company spent just $260,000 on Capitol Hill.
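The "nearly sevenfold" characterization follows directly from the two disclosed annual totals; a quick sanity check of the arithmetic (using the figures reported above, rounded as in the article):

```python
# Lobbying totals from OpenAI's federal disclosures, as reported above.
spend_2023 = 260_000     # full-year 2023
spend_2024 = 1_760_000   # full-year 2024
q4_2024 = 510_000        # final three months of 2024

# Year-over-year multiple: ~6.8x, i.e. "nearly sevenfold".
yoy_multiple = spend_2024 / spend_2023
print(f"{yoy_multiple:.2f}x")  # → 6.77x

# Q4 alone already exceeds the entire 2023 total.
print(q4_2024 > spend_2023)  # → True
```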

The company also disclosed a new in-house lobbyist, Meghan Dorn, who worked for five years for Senator Lindsey Graham and started at OpenAI in October. The filing also shows activity related to two new pieces of legislation in the final months of the year: the [House’s](https://www.congress.gov/bill/118th-congress/house-bill/9497/text) AI Advancement and Reliability Act, which would set up a government center for AI research, and the [Senate’s](https://www.congress.gov/bill/118th-congress/senate-bill/4178/text) Future of Artificial Intelligence Innovation Act, which would create shared benchmark tests for AI models.

OpenAI did not respond to questions about its lobbying efforts.

But perhaps more important, the disclosure is a clear signal of the company’s arrival as a political player, as its first year of serious lobbying ends and Republican control of Washington begins. While OpenAI’s lobbying spending is still dwarfed by its peers’—Meta tops the list of Big Tech spenders, with more than $24 million in 2024—the uptick comes as it and other AI companies have helped redraw the shape of AI policy.

For the past few years, AI policy has been something like a whack-a-mole response to the risks posed by deepfakes and misinformation. But over the last year, AI companies have started to position the success of the technology as pivotal to national security and American competitiveness, arguing that the government must therefore support the industry’s growth. As a result, OpenAI and others now seem poised to gain access to cheaper energy, lucrative national security contracts, and a more lax regulatory environment that’s unconcerned with the minutiae of AI safety.

While the big players seem more or less aligned on this grand narrative, messy divides on other issues are still threatening to break through the harmony on display at President Trump’s inauguration this week.

AI regulation really began in earnest after ChatGPT launched in November 2022. At that point, “a lot of the conversation was about responsibility,” says Liana Keesing, campaigns manager for technology reform at Issue One, a democracy nonprofit that tracks Big Tech’s influence.

Companies were asked what they’d do about sexually abusive [deepfake images](https://www.technologyreview.com/2022/12/13/1064810/how-it-feels-to-be-sexually-objectified-by-an-ai/) and [election disinformation](https://www.technologyreview.com/2023/05/15/1073019/catching-bad-content-in-the-age-of-ai/). “Sam Altman did a very good job coming in and painting himself early as a supporter of that process,” Keesing says.

OpenAI started its o

... (truncated, 15 KB total)
Resource ID: b87f2415c49e53cb | Stable ID: NGVmY2JhYT