Longterm Wiki

Credibility Rating

3/5
Good (3)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: TIME

A public-facing opinion piece introducing a communicative risk metric for AGI; useful as an example of science communication on x-risk but lacks technical rigor or novel empirical content.

Metadata

Importance: 32/100 · news article · commentary

Summary

Michael Wade introduces the AI Safety Clock, a metric placing humanity at '29 minutes to midnight' regarding existential risks from uncontrolled AGI. The clock tracks three factors—AI sophistication, autonomy, and integration with physical systems—to communicate urgency around AI development trajectories. Wade argues that while catastrophe has not yet occurred, accelerating capabilities and regulatory complexity demand immediate stakeholder attention.

Key Points

  • The AI Safety Clock currently reads 29 minutes to midnight, signaling we are roughly halfway to a potential tipping point of uncontrolled AGI.
  • Three tracked dimensions drive the clock: AI sophistication (ML/neural network advances), autonomy (limited but growing independence), and physical system integration.
  • Current AI systems still largely depend on human direction and perform narrow tasks, but autonomous vehicles and recommendation systems show early signs of independence.
  • Integration into critical infrastructure (energy grids, financial markets, military) remains incomplete but is advancing, particularly with autonomous drones.
  • The clock is intended as a communication and awareness tool, not alarmism, grounded in observable data on AI development pace and governance gaps.
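The article describes the three tracked dimensions but does not publish the formula that converts them into a clock reading. As a purely illustrative sketch, one could imagine each factor scored on a 0–1 scale and averaged, with the mean mapped linearly onto a 60-minute dial (0 minutes = midnight). The function below is a hypothetical construction for clarity, not the actual methodology behind the AI Safety Clock.

```python
def clock_reading(sophistication: float, autonomy: float, integration: float) -> int:
    """Hypothetical aggregation of the clock's three factors.

    Each factor is assumed to be a score in [0, 1]. Their average is
    mapped linearly onto a 60-minute scale: an average of 0 yields
    60 minutes to midnight, and an average of 1 yields midnight (0).
    The real clock's weighting and scale are not disclosed.
    """
    risk = (sophistication + autonomy + integration) / 3
    return round(60 * (1 - risk))

# Under this assumed linear mapping, the published reading of
# 29 minutes to midnight would correspond to an average factor
# score of roughly 0.52.
```

Any real composite of this kind would also need to justify its weighting: equal weights imply the three factors contribute identically to risk, which the article does not claim.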

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| AI Value Lock-in | Risk | 64.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 10 KB
- [Ideas](https://time.com/section/ideas)
- [AI in Focus](https://time.com/tag/ai-in-focus)

# I Launched the AI Safety Clock. Here’s What It Tells Us About Existential Risks


by

[Michael Wade](https://time.com/author/michael-wade/)

Oct 13, 2024 7:00 AM ET

![Wall clock in office desk with big sunset sun light effect.](https://time.com/redesign/_next/image/?url=https%3A%2F%2Fapi.time.com%2Fwp-content%2Fuploads%2F2024%2F10%2FGettyImages-1317754985.jpg%3Fquality%3D85%26w%3D1800&w=3840&q=75)

Getty Images


If uncontrolled [artificial general intelligence](https://time.com/6556168/when-ai-outsmart-humans/)—or “God-like” AI—is looming on the horizon, we are now about halfway there. Every day, the clock ticks closer to a potential doomsday scenario.

That’s why I introduced the [AI Safety Clock](https://www.imd.org/centers/tonomus/digital-ai-transformation-center/aisafetyclock/) last month. My goal is simple: I want to make clear that the dangers of uncontrolled AGI are real and present. The Clock’s current reading—29 minutes to midnight—is a measure of just how close we are to the critical tipping point where uncontrolled AGI could bring about existential risks. While no catastrophic harm has happened yet, the breakneck speed of AI development and the complexities of regulation mean that all stakeholders must stay alert and engaged.

This is not alarmism; it’s based on hard data. The AI Safety Clock tracks three essential factors: the growing sophistication of AI technologies, their increasing autonomy, and their integration with physical systems.

We are seeing remarkable strides across these three factors. The biggest are happening in machine learning and neural networks, with AI now outperforming humans in specific areas like image and speech recognition, mastering complex games [like Go](https://time.com/4252312/googles-artificial-intelligence-beats-legendary-go-player/), and even passing tests such as [business school exams](https://mackinstitute.wharton.upenn.edu/wp-content/uploads/2023/01/Christian-Terwiesch-Chat-GTP.pdf) and [Amazon coding interviews](https://www.businessinsider.com/chatgpt-amazon-job-interview-questions-answers-correctly-2023-1).

**Read More:** _[Nobody Knows How to Safety-Test AI](https://time.com/6958868/artificial-intelligence-safety-evaluations-risks/)_

Despite these advances, most AI systems today still depend on human direction, [as noted](https://hai.stanford.edu/news/ai-loop-humans-must-remain-charge) by the Stanford Institute for Human-Centered Artificial Intelligence. They are built to perform narrowly defined tasks, guided by the data and instructions we provide.

That said, some AI systems are already showing signs of limited independence. [Autonomous vehicles](https://www.bbc.com/future/article/2

... (truncated, 10 KB total)
Resource ID: 246e6e1c19b04bbb | Stable ID: MmViN2EzYj