Longterm Wiki

A public-facing risk communication tool from IMD Business School, intended to make the urgency of AI safety legible to non-technical audiences such as executives and policymakers. Useful as a reference for how AI risk is framed in mainstream institutional contexts.

Metadata

Importance: 42/100 · Tool page

Summary

The IMD AI Safety Clock is a visual indicator tool developed by IMD Business School and TONOMUS that tracks how close humanity may be to a critical AI safety threshold, analogous to the Bulletin of the Atomic Scientists' Doomsday Clock. It synthesizes expert assessments of AI risk factors to communicate urgency around AI safety governance and the need for proactive intervention before irreversible harms occur.

Key Points

  • Modeled after the Doomsday Clock concept, it uses a clock metaphor to represent proximity to a critical AI safety threshold.
  • Aggregates expert input on risk factors including capability growth, governance gaps, and alignment progress.
  • Designed to communicate AI existential and catastrophic risk urgency to business leaders and policymakers.
  • Highlights path-dependence and value lock-in concerns, emphasizing that inaction now may foreclose safer futures.
  • Serves as a public-facing communication and advocacy tool to mobilize awareness around AI safety timelines.

Cited by 2 pages

Page                       | Type | Quality
AI-Induced Irreversibility | Risk | 64.0
AI Value Lock-in           | Risk | 64.0

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 33 KB



# AI Safety Clock

Evaluating the risk of Uncontrolled Artificial General Intelligence

The IMD AI Safety Clock is a tool designed to evaluate the risks of Uncontrolled Artificial General Intelligence (UAGI) – autonomous AI systems that operate without human oversight and could potentially cause significant harm.

Our mission is to evaluate and communicate these risks to the public, policymakers, and business leaders, helping ensure the safe development and use of AI technologies.

![AI Safety Clock animation - IMD Business School](https://www.imd.org/wp-content/uploads/2024/05/20260304-Safety-clock-42-animation-2lancets-colors-1500x1500-1.gif)

What is the current reading on the AI Safety Clock?

A time for vigilance

Our latest assessment places the Clock **18 minutes from midnight**, a reflection of significant advances in artificial intelligence across three key dimensions: sophistication, autonomy, and execution. Large language models grew more capable, more efficient, and more widely available, with four major frontier models released in a single 25-day span. Autonomous AI agents moved from experimental prototypes to mainstream enterprise deployments across major platforms, with Microsoft, Google, and GitHub all launching agent orchestration tools. AI systems became physically embodied in robots, embedded in critical infrastructure, and increasingly integrated into military applications. Meanwhile, the regulatory landscape diverged sharply: the European Union began enforcing the world’s first comprehensive AI law with substantial penalties, while the Trump administration pursued an aggressive military AI strategy, declaring the Pentagon an “AI-first warfighting force” and pressuring 

... (truncated, 33 KB total)
Resource ID: da390c7cc819b788 | Stable ID: YjBiOTBlOT