IMD Launches AI Safety Clock
This IMD initiative uses a 'Safety Clock' metaphor to communicate AI risk urgency to business and policy audiences; relevant to discussions of value lock-in, x-risk framing, and public governance communication strategies.
Metadata
Importance: 38/100 · news article · news
Summary
IMD Business School launched an 'AI Safety Clock' initiative to track and signal proximity to critical AI safety thresholds, analogous to the Doomsday Clock. The tool aims to raise awareness among business leaders and policymakers about the urgency of AI safety concerns and governance needs.
Key Points
- IMD introduced an AI Safety Clock as a symbolic and analytical tool to measure progress or regression on AI safety globally.
- The clock concept is designed to communicate risk urgency to non-technical audiences, including executives and policymakers.
- The initiative draws parallels to the Bulletin of the Atomic Scientists' Doomsday Clock, applying similar framing to AI existential risks.
- The project highlights growing institutional concern about AI reaching points of no return without adequate safety frameworks.
- IMD positions this as a governance and awareness tool amid rapidly accelerating AI capabilities.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI-Induced Irreversibility | Risk | 64.0 |
Cached Content Preview
HTTP 200 · Fetched Mar 20, 2026 · 4 KB

News Stories · Artificial Intelligence
# IMD creates AI Safety Clock
New tool evaluates the threat to humanity of Uncontrolled Artificial General Intelligence
September 2024
IMD has created an [AI Safety Clock](https://www.imd.org/centers/tonomus/digital-ai-transformation-center/aisafetyclock/) to evaluate the risk posed by artificial intelligence outside of human control.
Inspired by the original ‘Doomsday Clock’ that shows how close humanity is to nuclear annihilation, a team of experts at IMD’s TONOMUS Global Center for Digital and AI Transformation has set the clock at 29 minutes to midnight, a symbolic representation of the point when Uncontrolled Artificial General Intelligence (UAGI) could inflict significant harm on humanity.
The AI Safety Clock was built following a wide-ranging evaluation of factors that drive AI-related risks. This is supported by a proprietary dashboard that tracks developments across over 1,000 websites and 3,470 news feeds, providing real-time insights and continuous monitoring of technological and regulatory advancements.
The team of experts focused on potential risks posed by uncontrolled AGI – artificial intelligence which could act independently of human control and in effect wield its own power.
While technological developments such as new models, chips, autonomous agents, robotics and weaponisation push the clock closer towards midnight, IMD emphasizes that, alongside regulation, companies developing AI also play a vital role in mitigating the risks posed by AGI. Effective collaboration between governments and the private sector is critical to addressing these challenges.
By evaluating and communicating the risks to the public, policymakers, and business leaders, the AI Safety Clock’s creators aim to help ensure the safe development and use of AI technologies.
“The IMD AI Safety Clock is designed to raise awareness, not alarm. As we set the Clock for the first time, the message is that the risk is serious, but it’s not too late to act,” said Michael Wade, Professor of Innovation and Strategy and Director of IMD’s [TONOMUS Global Center for Digital and AI Transformation](https://www.imd.org/centers/tonomus/digital-ai-transformation-center/). “Uncontrolled AGI could cause havoc for us all, and we are moving from a time of medium risk to one of high risk. But we’re not past the point of no return.
“Effective, joined-up regulation can constrain the worst tendencies of new technologies without losing their benefits, and we call on international actors and tech giants alike to act in all of our best interests.”
Resource ID: 9d06f4c3cadab9b9 | Stable ID: MTRlZDUzMz