Systemic Safety Grants
Government
Credibility Rating
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: UK AI Safety Institute
Official UK AISI announcement of a government-funded research grants programme; relevant to researchers seeking funding for societal AI risk work and to those tracking how governments are operationalizing AI safety research agendas.
Metadata
Summary
The UK AI Safety Institute launched a Systemic AI Safety Grants programme offering up to £200,000 to researchers from academia, industry, and civil society. The programme targets broader societal risks from AI deployment—including sector-specific impacts in healthcare, education, and finance, multi-agent interactions, and infrastructure vulnerabilities—beyond individual model capabilities. It aims to map research priorities, develop new methodologies, and inform government policy ahead of anticipated rapid AI adoption over the next 2-5 years.
Key Points
- Grants of up to £200,000 available to researchers studying systemic risks from AI deployment across society and specific sectors.
- Systemic AI safety focuses on societal-level risks beyond individual model capabilities, including multi-agent interactions and critical infrastructure vulnerabilities.
- Programme seeks to understand AI risks in high-stakes sectors like healthcare, education, and finance, requiring both technical and sociotechnical approaches.
- Findings are intended to directly inform UK government policy and priority interventions before risks become severe harms.
- First announced at the AI Seoul Summit; part of AISI's broader mission to prepare for rapid AI progress and increased adoption within 2-5 years.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| UK AI Safety Institute | Organization | 52.0 |
Cached Content Preview
Advancing the field of systemic AI safety: grants open | AISI Work
Calling researchers from academia, industry, and civil society to apply for up to £200,000 of funding.
Oct 15, 2024 · Chris Summerfield, Shahar Avin
Note to readers: we changed our name to the AI Security Institute on 14 February 2025.
Introduction
At the AI Safety Institute (AISI), we work to understand and measure a wide spectrum of AI risks, which can then inform decision making by governments and policy makers.
One key area of focus is systemic AI safety, an emerging area of research that aims to understand and mitigate the broader societal risks associated with AI deployment, beyond the capabilities of individual models. It is critical for us to advance this field, to map priority areas of research, and to develop new methods and approaches. We want to be prepared for the possibility of continued rapid progress in AI R&D, and for significantly increased adoption of AI across various sectors in the next 2-5 years. Our Systemic AI Safety Grants programme, first announced at the AI Seoul Summit, is designed to expand the field, deepen our understanding of the topic, and help us anticipate and mitigate potential risks.
Tackling these risks head on will boost public confidence in the range of AI innovations which are being increasingly adopted across the economy, sparking long-term growth and keeping the UK at the heart of research into responsible and trustworthy AI development. Ensuring public confidence in AI is central to the government’s plans for seizing its potential, as the UK harnesses the technology to drive up productivity and deliver public services which are fit for the future. To ensure the UK can continue to harness the enormous opportunities of AI innovations, the government has also committed to introduce highly targeted legislation for the handful of companies developing the most powerful AI models, ensuring a proportionate approach to regulation rather than new blanket rules on its use.
What is systemic AI safety?
Systemic AI safety is a field that aims to understand and mitigate the broader societal risks associated with AI deployment, beyond the capabilities of individual models. It is critical for us to advance this field and map priority areas of research. AISI's work to date includes evaluations of dangerous capabilities and safeguards, studies on user interactions with models, and work on risk governance through protocols and safety cases. We are now expanding our focus to the systemic impact of frontier AI systems.
Systemic AI safety focuses on risks and mitigations in the context of AI deployment, both in specific sectors and across society. For example, we want to know what risks could emerge when frontier AI i
... (truncated, 7 KB total)