Longterm Wiki

Advanced Research and Invention Agency (ARIA)

Government
Founded: 2022 (4 years old) · HQ: London, UK · aria.org.uk

The UK's Advanced Research and Invention Agency (ARIA) is a government research funding body modeled on DARPA. Its Safeguarded AI programme, led by Programme Director David "davidad" Dalrymple with Scientific Director Yoshua Bengio (joined Aug 2024), has committed GBP 59 million over 3-5 years to develop AI systems with mathematical safety guarantees through formal verification. The programme funds 48 projects across multiple technical areas: TA1.1 (theory, GBP 3.5M, 22 projects), TA1.2+1.3 (platform, GBP 14.2M, 8 projects), TA1.4 (sociotechnical, GBP 3.4M, 6 teams), TA2 (ML, GBP 1M Phase 1, 3 teams), and TA3 (applications, GBP 5.4M, 9 projects). A Nov 2025 pivot abandoned TA2 Phase 2 (GBP 18M) and cancelled TA3 Phase 2 (GBP 8.4M), expanding TA1's scope to broader mathematical assurance and adding a cybersecurity focus on formally verified firewalls for critical infrastructure.
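As a sanity check, the per-area figures quoted on this page can be tallied against the headline numbers (a quick sketch using only values stated here; the TA1.2+1.3 count of 8 projects and TA2's 3 teams come from the division summaries):

```python
# Phase 1 figures as quoted on this page: (project count, budget in GBP millions).
phase1 = {
    "TA1.1 (theory)":         (22, 3.5),
    "TA1.2+1.3 (platform)":   (8, 14.2),
    "TA1.4 (sociotechnical)": (6, 3.4),
    "TA2 (ML)":               (3, 1.0),
    "TA3 (applications)":     (9, 5.4),
}

total_projects = sum(n for n, _ in phase1.values())
total_budget = sum(b for _, b in phase1.values())

print(total_projects)                # 48, matching the headline project count
print(f"GBP {total_budget:.1f}M")    # GBP 27.5M
```

Note that the Phase 1 tranches sum to GBP 27.5M, well under the GBP 59M total commitment; the remainder covered later phases, including the TA2 Phase 2 (GBP 18M) and TA3 Phase 2 (GBP 8.4M) tranches dropped in the Nov 2025 pivot.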

Total Funding Raised: £59 million (as of Nov 2025)
Grants Made: $920K (92 grants)

Facts (7)
Financial
Total Funding Raised: £59 million
Grant Count: 48
Organization
Founded Date: 2022
Headquarters: London, UK
Legal Structure: Non-departmental public body (UK government agency)
General
Website: https://www.aria.org.uk

Divisions (6)
Program: Safeguarded AI

ARIA's flagship AI safety programme, led by Programme Director David 'davidad' Dalrymple with Scientific Director Yoshua Bengio (joined Aug 2024). GBP 59M committed. The Nov 2025 pivot expanded TA1's scope to broader 'mathematical assurance and auditability', abandoned TA2 Phase 2, and cancelled TA3 Phase 2 in favour of a cybersecurity focus.

Program: TA1.1 (Theory)

GBP 3.5M Phase 1 across 22 projects. Mathematical representations and formal semantics for world-models, specifications, and proofs. Covers category theory, probabilistic logic, and formal verification foundations. Scope expanded in Nov 2025 pivot.

Program: TA1.2 + TA1.3 (Platform)

GBP 14.2M across 8 projects. TA1.2 (backend): proof checking, automated reasoning, GPU optimization. TA1.3 (human-computer interface): collaborative modeling, type-theoretic environments.

Program: TA1.4 (Sociotechnical)

GBP 3.4M across 6 teams. Law-following AI, formal models of society, governance models, privacy-preserving verification, preference aggregation, and deliberative AI specifications.

Program: TA3 (Applications)

GBP 5.4M Phase 1 across 9 teams (continuing to completion). Applications in energy grid, automated driving, clinical trials, logistics, biopharmaceuticals, and telecom. Phase 2 (GBP 8.4M) cancelled Nov 2025; replaced by cybersecurity pivot to formally-verified firewalls for critical infrastructure.

1 inactive division
Program: TA2 (Machine Learning)

Phase 1: GBP 1M across 3 teams (completed). Phase 2 (GBP 18M) abandoned in the Nov 2025 pivot: frontier AI advances made dedicated ML capability development less valuable. Funds redirected to expand TA1.

Related Wiki Pages

Top Related Pages

Approaches

Formal Verification (AI Safety)

Organizations

US AI Safety Institute · Frontier Model Forum · Chan Zuckerberg Initiative

Concepts

Provable / Guaranteed Safe AI · Scientific Research Capabilities

Other

Ben Goldhaber · Ramana Kumar