Longterm Wiki

Yoshua Bengio joins Safeguarded AI as Scientific Director

blog

Credibility Rating

2/5
Mixed (2)

Mixed quality. Some useful content but inconsistent editorial standards. Claims should be verified.

Rating inherited from publication venue: Substack

Metadata

1 FactBase fact citing this source

Cached Content Preview

HTTP 200 · Fetched Apr 7, 2026 · 4 KB
Archived by the Wayback Machine (Collection: Common Crawl): http://web.archive.org/web/20251107185153/https://ariaresearch.substack.com/p/yoshua-bengio-joins-safeguarded-ai

ARIA


Yoshua Bengio joins Safeguarded AI as Scientific Director

Aug 07, 2024


We're excited to welcome Professor Yoshua Bengio as Scientific Director for Safeguarded AI, supporting the work led by Programme Director ‘davidad’ Dalrymple. 

A world-renowned computer scientist and a pioneer in deep learning, Yoshua was awarded the 2018 Turing Award, often referred to as the "Nobel Prize of Computing", for his groundbreaking work in artificial intelligence.

Yoshua’s work on the Cautious Scientist AI research agenda, which seeks to develop efficient methods for providing high-assurance guarantees that an AI system won’t take harmful actions, aligns closely with the Safeguarded AI programme, through which we’re looking to construct a ‘gatekeeper’ – an AI system tasked with understanding and reducing the risks of other AI agents. Both approaches are described in this position paper, co-authored by davidad, Yoshua, and others, which sets out a family of AI safety approaches termed ‘Guaranteed Safe AI’.

As with other ARIA programmes, Safeguarded AI is split into different ‘Technical Areas’ (TAs) – each with a specific aim and budget, and each designed to engage researchers from a variety of disciplines. Our most recent call (TA3) aims to demonstrate that an AI gatekeeper can improve both performance and reliability in safety-critical domains compared with current operations. The programme will bring together research teams to work with global experts, academics, and entrepreneurs, laying the groundwork for deploying Safeguarded AI in one or more areas.

As Scientific Director, Yoshua will work with the Safeguarded AI team and our R&D Creators, providing scientific and strategic advice across the full programme, with a particular focus on TA3 and TA2.

Hear more from davidad and Yoshua in this interview with MIT Technology Review.

Get involved

Apply for TA3 funding: If you're working on optimising energy systems, clinical trials, telecommunications networks, or similar mission-critical applications, we encourage you to apply. 

Express interest in TA2 engagement: We are seeking lightweight initial Expressions of Interest from individuals 

... (truncated, 4 KB total)
Resource ID: kb-b094c0a16192d192