Longterm Wiki

The IMD AI Safety Clock is a risk-tracking metric from the TONOMUS Global Center for Digital & AI Transformation; this update article is useful context for monitoring institutional risk assessments of AGI timelines and threat dimensions, though it is a journalistic summary rather than a technical or policy document.

Metadata

Importance: 42/100 · organizational report · news

Summary

The IMD AI Safety Clock has made its largest single jump to 23:40 (20 minutes to midnight), driven by advances in agentic AI, weaponization concerns, Chinese AI competition, and fragmented global regulation. The clock tracks three dimensions—AI sophistication, autonomy, and execution—to signal proximity to uncontrolled AGI. Over 12 months since launch, the clock has advanced nine minutes total, indicating an accelerating pace of risk escalation.

Key Points

  • Clock advanced 4 minutes to 23:40 in its largest single update, totaling 9 minutes of advancement over 12 months since September 2024 launch.
  • Key drivers include agentic AI reasoning breakthroughs, AI weaponization, rise of Chinese AI models, and deepening AI-defense sector ties.
  • Assessment uses three dimensions: sophistication (reasoning), autonomy (independence from human oversight), and execution (physical-world influence).
  • Combines quantitative data from thousands of sources with expert qualitative judgment, inspired by the Doomsday Clock framework.
  • Clock is designed as a trend signal toward 'Uncontrolled Artificial General Intelligence' (UAGI), not a precise prediction of catastrophe timing.

Cited by 2 pages

| Page | Type | Quality |
|---|---|---|
| AI-Induced Irreversibility | Risk | 64.0 |
| AI Value Lock-in | Risk | 64.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 23 KB

### One year after launch, AI risk tracker underlines worrying implications of accelerating AI adoption.

The [IMD AI Safety Clock has jumped four minutes forward to 23:40, just 20 minutes to midnight.](https://www.imd.org/centers/tonomus/digital-ai-transformation-center/aisafetyclock/) The move, the largest single adjustment since the clock's launch a year ago, was driven by a new phase in AI development and deployment. Over the course of 12 months, the AI Safety Clock has advanced nine minutes closer to midnight, highlighting the accelerating pace of risk escalation.

![23-40](https://www.imd.org/wp-content/uploads/2025/09/23-40.jpg)

## Tracking humanity’s race with AI

In September 2024, the [IMD AI Safety Clock](https://www.imd.org/centers/tonomus/digital-ai-transformation-center/aisafetyclock/) was launched by the [TONOMUS Global Center for Digital & AI Transformation](https://www.imd.org/centers/tonomus/digital-ai-transformation-center/aisafetyclock/). The clock is a metaphorical risk gauge that estimates how close humanity is to a tipping point where Uncontrolled Artificial General Intelligence (UAGI) could pose serious threats. Inspired by the [Doomsday Clock,](https://thebulletin.org/doomsday-clock/) which measures existential risks from nuclear conflict, the AI Safety Clock tracks our distance from midnight, the moment when UAGI could become uncontrollable or dangerously misaligned with human interests.

Rather than predicting exact dates or events, the clock is designed as a signal of trends. It combines quantitative data from thousands of news feeds, research reports, and monitoring systems with expert qualitative judgment. The assessment focuses on three key dimensions: sophistication, the reasoning and problem-solving abilities of AI; autonomy, the independence with which AI systems operate without human oversight; and execution, the capacity of AI to influence the physical world.

... (truncated, 23 KB total)
Resource ID: 23067a0dd2856cc6 | Stable ID: ZmUzYmRiZW