Longterm Wiki

Credibility Rating

4/5 (High)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: AI Now Institute

Published by the AI Now Institute, a prominent civil society research organization focused on AI's social implications; relevant to debates about how governments institutionalize AI oversight and what risks get prioritized.

Metadata

Importance: 42/100
Tags: press release, commentary

Summary

The AI Now Institute responds to the UK government's decision to rename and reorient the AI Safety Institute into the AI Security Institute, expressing concern that this shift deprioritizes broader societal safety in favor of narrower security framing. The statement argues this transition risks sidelining critical work on AI harms affecting workers, marginalized communities, and democratic institutions. It calls for maintaining robust, independent AI safety oversight.

Key Points

  • AI Now criticizes the rebranding of the UK AI Safety Institute as signaling a retreat from comprehensive AI safety toward a narrower national security focus.
  • The shift is seen as potentially deprioritizing harms to workers, marginalized communities, and civil society in favor of state-centric security concerns.
  • The statement warns that framing AI governance primarily around 'security' can entrench existing power imbalances and benefit incumbent tech interests.
  • AI Now calls for maintaining independent, publicly accountable AI safety institutions that address the full spectrum of AI-related risks.
  • The transition reflects a broader political trend of governments narrowing AI oversight mandates amid industry lobbying pressure.

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| UK AI Safety Institute | Organization | 52.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 2 KB
_On February 14th, 2025 the UK Department for Science, Innovation and Technology [announced](https://www.gov.uk/government/news/tackling-ai-security-risks-to-unleash-growth-and-deliver-plan-for-change) the UK AI Safety Institute’s transition to the UK AI Security Institute. Read AI Now’s statement on the transition below_.

AISI’s partnership with the Defence Science and Technology Laboratory, the Ministry of Defence’s science and technology organisation, signals the UK government turning its attention to frontier AI use within the defense and national security apparatus. This comes on the heels of a slew of recent announcements that major AI companies are integrating their frontier AI models into national security use cases. As our research has demonstrated, these systems carry significant risks, including [threats to national security](https://www.nytimes.com/2025/01/27/opinion/ai-trump-military-national-security.html) given the cyber vulnerabilities inherent to frontier AI models, and the risk that the sensitive data on which they may be trained can be extracted by adversaries.

**While we welcome AISI’s signals that it may investigate these risks amidst heightened “AI race” dynamics, we warn against an approach that, under the banner of security, would apply piecemeal or superficial scrutiny that gives these systems a clean bill of health before they are ready. These issues cannot be easily fixed or patched, and** [**require independent safety-critical evaluation**](https://arxiv.org/abs/2410.14831) **which must be insulated from industry partnerships.** If our leaders barrel ahead with their plans to implement frontier AI for defense use, they risk undermining our national security. This is a trade-off that AI’s purported benefits cannot justify.

_This statement can be attributed to AI Now Chief AI Scientist Heidy Khlaaf_.

Research Areas

[Safety & Security](https://ainowinstitute.org/research-areas/safety-security)