Most Researchers Do Not Believe AGI Is Imminent. Why Do Policymakers Act Otherwise?
Credibility Rating
3/5
Good (3): Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: TechPolicy.Press
Relevant for wiki users examining how AGI discourse shapes AI policy and whether existential-risk framing reflects broad researcher consensus or is driven by narrower industry interests.
Metadata
Importance: 55/100 · Tags: opinion piece, commentary
Summary
This article examines the disconnect between AI researcher consensus—which largely does not view AGI as imminent—and policymakers who treat near-term AGI as a serious planning assumption. It critiques how industry narratives and lobbying may be shaping policy priorities in ways that diverge from mainstream scientific opinion.
Key Points
- Surveys of AI researchers show most do not believe AGI is imminent, yet policy discussions increasingly treat it as a near-term concern.
- Industry actors with financial interests in AGI narratives may disproportionately influence policymakers' assumptions and legislative agendas.
- The gap between researcher consensus and policy framing risks misallocating regulatory attention and resources.
- Focusing policy on speculative AGI scenarios may crowd out attention to near-term, concrete AI harms already occurring.
- Policymakers should ground AI governance in empirical expert consensus rather than industry forecasts or speculative timelines.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| The Case Against AI Existential Risk | Argument | 58.0 |
Cached Content Preview
HTTP 200 · Fetched Mar 20, 2026 · 18 KB
# Most Researchers Do Not Believe AGI Is Imminent. Why Do Policymakers Act Otherwise?
Eryk Salvaggio / Mar 19, 2025
_Eryk Salvaggio is a Fellow at Tech Policy Press._

WASHINGTON, DC - JANUARY 21: US President Donald Trump, accompanied by (R-L) OpenAI CEO Sam Altman, SoftBank CEO Masayoshi Son, and Oracle co-founder, CTO, and Executive Chairman Larry Ellison, in the Roosevelt Room of the White House for the announcement of the Stargate project. (Photo by Andrew Harnik/Getty Images)
AI research—these days primarily driven by corporate interests—often embraces strange priorities. Amidst multiple crises in public health, climate, and democracy, we could do better than synthetic image and text generation products and personalized chatbots as the defining technologies of our era.
But the companies hyping these products tell us every improvement demonstrates progress toward an even stranger goal: artificial general intelligence, or AGI. The splashy announcement of any new model is cast as evidence of the inevitable trajectory toward machines that learn and act as humans do. New capabilities are pitched as steps toward the goal of machines that may even outperform humans. Investment in the companies that build these systems is, of course, heavily dependent on this promise.
Around the world, policymakers appear increasingly eager to satisfy the interests of tech firms that claim they can deliver AGI. Perhaps it’s natural—if you were a politician or a head of state confronted with a complex, interconnected set of problems with no immediate solution, you might crave the answer these companies are selling. And you might be more than a little hungry for the type of transformation that such technology might create under your leadership.
However, there is danger in making AI policy goals just as invested in the promise of AGI as are the tech sector's leaders. When policymakers buy the hype, the public pays for it.
## AGI is Unlikely in the Near Term
First, it’s important to establish that there is good reason for skepticism about claims that AGI is imminent, despite the speculative fever amongst industry figures and [some in the press](https://www.nytimes.com/2025/03/04/opinion/ezra-klein-podcast-ben-buchanan.html). A [recent survey](https://aaai.org/wp-content/uploads/2025/03/AAAI-2025-PresPanel-Report-Digital-3.7.25.pdf) of 475 AI researchers by the Association for the Advancement of Artificial Intelligence (AAAI), conducted as part of its panel on the future of AI research, found that “[t]he majority of respondents (76%) assert that ‘scaling up current AI approaches’ to yield AGI is ‘unlikely’ or ‘very unlikely’ to succeed, suggesting doubts about whether current machine learning paradigms are sufficient for achieving general intelligence.” The many limitations of transformer-based architectures suggest AGI is hardly right around the corner
... (truncated, 18 KB total)
Resource ID: fe6c042996d3aa1b | Stable ID: ZWFhNGIyOT