Human Extinction Threat from AI Overblown, Says Gary Marcus
A 2023 news article capturing Gary Marcus's skeptical stance on AI existential risk claims, useful for understanding the debate between x-risk advocates and critics who question the plausibility of near-term catastrophic AI scenarios.
Metadata
Importance: 30/100 · news article · news
Summary
AI researcher and critic Gary Marcus argues that fears of human extinction from AI are exaggerated, pushing back against prominent AI doom narratives. Marcus contends that current AI systems are fundamentally limited and that catastrophic existential risk claims distract from more immediate, tractable AI harms. The article presents his skeptical perspective as a counterpoint to the wave of extinction-risk warnings from figures like Geoffrey Hinton and others in 2023.
Key Points
- Gary Marcus argues that AI extinction risk is overstated and that current LLMs lack the reasoning capabilities needed to pose existential threats.
- Marcus contends that doom narratives distract attention and resources from real, near-term AI harms like misinformation and bias.
- The article situates Marcus's views against the backdrop of high-profile 2023 AI safety warnings, including the open letter signed by prominent researchers.
- Marcus advocates for regulation and oversight focused on concrete, demonstrable harms rather than speculative catastrophic scenarios.
- His perspective represents a significant minority view among AI researchers who are skeptical of x-risk framing.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| The Case Against AI Existential Risk | Argument | 58.0 |
Cached Content Preview
HTTP 200 · Fetched Mar 20, 2026 · 5 KB
Source: https://www.france24.com/en/live-news/20230604-human-extinction-threat-overblown-says-ai-sage-marcus
# Human extinction threat 'overblown' says AI sage Marcus
San Francisco (AFP) – Ever since the poem-churning ChatGPT burst onto the scene six months ago, expert Gary Marcus has urged caution about artificial intelligence's ultra-fast development and adoption.
Issued on: 04/06/2023 - 03:38
Reading time: 3 min
But against AI's apocalyptic doomsayers, the New York University emeritus professor told AFP in a recent interview that the technology's existential threats may currently be "overblown."
"I'm not personally that concerned about extinction risk, at least for now, because the scenarios are not that concrete," said Marcus in San Francisco.
"A more general problem that I am worried about... is that we're building AI systems that we don't have very good control over and I think that poses a lot of risks, (but) maybe not literally existential."
Long before the advent of ChatGPT, Marcus designed his first AI program in high school -- software to translate Latin into English -- and after years of studying child psychology, he founded Geometric Intelligence, a machine learning company later acquired by Uber.
## 'Why AI?'
In March, alarmed that ChatGPT creator OpenAI was releasing its latest, more powerful AI model with Microsoft, Marcus signed an open letter with more than 1,000 people, including Elon Musk, calling for a global pause in AI development.
But last week he did not sign the more succinct statement by business leaders and specialists -- including OpenAI boss Sam Altman -- that caused a stir.
Global leaders should be working to reduce "the risk of extinction" from artificial intelligence technology, the signatories insisted.
The one-line statement said tackling the risks from AI should be "a global priority alongside other societal-scale risks such as pandemics and nuclear war".
Signatories included those who are building systems with a view to achieving "general" AI, a technology whose cognitive abilities would be on par with those of humans.
"If you really think there's existential risk, why are you working on this at all? That's a pretty fair question to ask," Marcus said.
Instead of focusing on far-fetched scenarios in which no one survives, society should pay attention to where the real dangers lie, Marcus argued.
"People might try to manipulate the markets by using AI to cause all kinds of mayhem and then we might, for example, blame the Russians and say, 'look what they've done to our country' when the Russians actually weren't involved," he continued.
"You (could) have this escalation that winds up in nuclear war or something like that. So I think there are scenarios where
... (truncated, 5 KB total)
Resource ID: 9a7642dfbd957ca5 | Stable ID: NGU1Y2E4OT