Advanced Research and Invention Agency (ARIA)
The UK's Advanced Research and Invention Agency (ARIA) is a government research funding body modelled on DARPA. Its Safeguarded AI programme, led by Programme Director David "davidad" Dalrymple with Scientific Director Yoshua Bengio (joined Aug 2024), has committed GBP 59 million over 3-5 years to develop AI systems with mathematical safety guarantees through formal verification. The programme funds 48 projects across multiple technical areas: TA1.1 (theory, GBP 3.5M, 22 projects), TA1.2+1.3 (platform, GBP 14.2M), TA1.4 (sociotechnical, GBP 3.4M), TA2 (ML, GBP 1M Phase 1), and TA3 (applications, GBP 5.4M, 9 projects). A Nov 2025 pivot abandoned TA2 Phase 2 (GBP 18M) and cancelled TA3 Phase 2 (GBP 8.4M), expanding TA1's scope to broader mathematical assurance and adding a cybersecurity focus on formally verified firewalls for critical infrastructure.
Facts
7 Divisions
6 active divisions
Safeguarded AI: ARIA's flagship AI safety programme, led by Programme Director David 'davidad' Dalrymple with Scientific Director Yoshua Bengio (joined Aug 2024). GBP 59M committed. The Nov 2025 pivot expanded TA1's scope to broader 'mathematical assurance and auditability', abandoned TA2 Phase 2, and cancelled TA3 Phase 2 in favour of a cybersecurity focus.
TA1.1 (Theory): GBP 3.5M Phase 1 across 22 projects. Mathematical representations and formal semantics for world-models, specifications, and proofs. Covers category theory, probabilistic logic, and formal verification foundations. Scope expanded in the Nov 2025 pivot.
TA1.2 + TA1.3 (Platform): GBP 14.2M across 8 projects. TA1.2 (backend): proof checking, automated reasoning, GPU optimization. TA1.3 (human-computer interface): collaborative modelling, type-theoretic environments.
TA1.4 (Sociotechnical): GBP 3.4M across 6 teams. Law-following AI, formal models of society, governance models, privacy-preserving verification, preference aggregation, and deliberative AI specifications.
TA3 (Applications): GBP 5.4M Phase 1 across 9 teams (continuing to completion). Applications in energy grids, automated driving, clinical trials, logistics, biopharmaceuticals, and telecoms. Phase 2 (GBP 8.4M) cancelled Nov 2025 and replaced by a cybersecurity pivot to formally verified firewalls for critical infrastructure.
1 inactive division
TA2 (Machine Learning): Phase 1: GBP 1M across 3 teams (completed). Phase 2 (GBP 18M) abandoned in the Nov 2025 pivot, as frontier AI advances made dedicated ML capability development less valuable; funds redirected to expand TA1.
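The proof checking in TA1.2's backend rests on a standard idea in formal verification: a small trusted kernel mechanically checks proof objects, so a property holds with mathematical certainty once the kernel accepts its proof. A minimal illustration of such a machine-checked proof, in Lean 4 syntax (an illustrative sketch only, not drawn from any programme deliverable):

```lean
-- Illustrative only: a tiny machine-checked proof of the kind a
-- proof-checking kernel verifies. Not code from the Safeguarded AI programme.
-- We prove commutativity of addition on the naturals by induction.
theorem add_comm' (m n : Nat) : m + n = n + m := by
  induction n with
  | zero => simp                                     -- m + 0 = 0 + m
  | succ k ih => simp [Nat.add_succ, Nat.succ_add, ih]  -- step uses hypothesis ih
```

Once the kernel accepts the proof term, the theorem is guaranteed to hold (relative to the kernel's soundness); the programme's goal is analogous guarantees for safety properties of AI systems acting in modelled environments.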
Related Wiki Pages
Yoshua Bengio
Turing Award winner and deep learning pioneer who became a prominent AI safety advocate, co-founding safety research initiatives at Mila and co-sig...
David Dalrymple
AI safety researcher and Programme Director at ARIA, leading a £59M Safeguarded AI programme focused on formal verification and mathematical guaran...
Provably Safe AI (davidad agenda)
An ambitious research agenda to design AI systems with mathematical safety guarantees from the ground up, led by ARIA's £59M Safeguarded AI program...
UK AI Safety Institute
The UK AI Safety Institute (renamed AI Security Institute in February 2025) is a government body with approximately 30+ technical staff and an annu...