Longterm Wiki

Credibility Rating

High (4/5)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: MIT Technology Review

Relevant to AI safety governance discussions about power concentration and the structural barriers to effective oversight of frontier AI development by a small set of dominant corporations.

Metadata

Importance: 52/100 · news article · commentary

Summary

MIT Technology Review article arguing that despite the open-source and democratization rhetoric surrounding AI development, the field is overwhelmingly controlled by a handful of large technology corporations. The piece examines how compute, talent, data, and infrastructure dependencies concentrate AI power in Big Tech, raising concerns about accountability and governance.

Key Points

  • A small number of tech giants (Google, Microsoft, Amazon, Meta) effectively controls AI development through dominance of compute, cloud infrastructure, and research talent.
  • Open-source releases and partnerships with startups do not fundamentally shift power away from Big Tech, as dependencies on proprietary infrastructure remain.
  • The concentration of AI ownership poses risks for accountability, democratic governance, and equitable distribution of AI's benefits and harms.
  • Regulatory frameworks have struggled to keep pace with the speed of AI deployment, leaving Big Tech largely unchecked in shaping AI's trajectory.
  • The article challenges the narrative that AI is a broadly distributed technological revolution, framing it instead as a consolidation of corporate power.

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 17 KB

Until late November, when the [epic saga of OpenAI’s board breakdown](https://www.theatlantic.com/technology/archive/2023/11/sam-altman-open-ai-chatgpt-chaos/676050/) unfolded, the casual observer could be forgiven for assuming that the industry around generative AI was a vibrant competitive ecosystem.

But this is not the case—nor has it ever been. And understanding why is fundamental to understanding what AI is, and what threats it poses. **Put simply, in the context of the current paradigm of building larger- and larger-scale AI systems,** [**there is no AI without Big Tech**](https://ainowinstitute.org/2023-landscape) **.** With vanishingly few exceptions, every startup, new entrant, and even AI research lab is dependent on these firms. All rely on the computing infrastructure of Microsoft, Amazon, and Google to train their systems, and on those same firms’ vast consumer market reach to deploy and sell their AI products.

Indeed, many startups simply license and rebrand AI models created and sold by these tech giants or their partner startups. This is because large tech firms have accrued significant advantages over the past decade. Thanks to platform dominance and the self-reinforcing properties of the surveillance business model, they own and control the ingredients necessary to develop and deploy large-scale AI. They also [shape the incentive structures](https://dl.acm.org/doi/10.1145/3488666) for the [field of research and development](https://www.science.org/doi/abs/10.1126/science.ade2420) in AI, defining the technology’s present and future.

The recent OpenAI saga, in which Microsoft exerted its quiet but firm dominance over the “capped profit” entity, provides a powerful demonstration of what we’ve been analyzing for the last half-decade. To wit: those with the money make the rules. And right now, they’re engaged in a race to the bottom, releasing systems before they’re ready in an attempt to retain their dominant position.

Concentrated power isn’t just a problem for markets. Relying on a few unaccountable corporate actors for core infrastructure is a problem for democracy, culture, and individual and collective agency. Without significant intervention, the AI market will only end up rewarding and entrenching the very same companies that reaped the profits of the invasive surveillance business model that has powered the commercial internet, often at the expense of the public.

The Cambridge Analytica scandal was just one among many that exposed this seedy reality. Such concentration also creates single points of failure, which raises real security threats. And Securities and Exchange Commission chair Gary Gensler [has warned](https://www.ft.com/content/8227636f-e819-443a-aeba-c8237f0ec1ac) that having a small number of AI models and actors at the foundation of the AI ecosystem poses systemic risks to the financial 

... (truncated, 17 KB total)
Resource ID: e815621b167035b0 | Stable ID: ZGIzN2FhNz