Longterm Wiki

Future of Life Institute's research with Mithril Security


Credibility Rating

3/5 — Good

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Future of Life Institute

This work is relevant to compute governance debates around verifying AI training thresholds, particularly in the context of international agreements or regulatory frameworks that require hardware-level enforcement mechanisms.

Metadata

Importance: 62/100 · organizational report · analysis

Summary

This resource presents research by the Future of Life Institute in collaboration with Mithril Security on using hardware-based mechanisms to enforce compute governance for AI systems. It explores how trusted execution environments and hardware attestation can provide verifiable oversight of AI training and deployment. The work addresses how physical hardware constraints can serve as enforceable policy levers.

Key Points

  • Explores hardware-backed mechanisms (e.g., trusted execution environments) to enforce limits on AI compute usage
  • Proposes using hardware attestation to create verifiable, tamper-resistant compute governance frameworks
  • Addresses the gap between policy intentions and enforceable technical controls in AI oversight
  • Collaboration between FLI (policy focus) and Mithril Security (confidential computing expertise) bridges technical and governance domains
  • Relevant to international AI governance discussions where verifiable compliance with compute thresholds is needed
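The attestation idea in the points above can be illustrated with a minimal sketch. This is a hypothetical toy, not the FLI/Mithril Security implementation: real remote attestation uses asymmetric signatures rooted in a chip vendor's key hierarchy, whereas an HMAC shared secret stands in for the hardware key here, and the claim fields are invented for illustration.

```python
import hashlib
import hmac
import json

# Simulated hardware root key (real TEEs use vendor-provisioned asymmetric keys).
HARDWARE_KEY = b"simulated-hardware-root-key"

def make_report(model_hash: str, flops_used: int) -> dict:
    """The 'TEE' signs a report binding the loaded model to its compute usage."""
    claims = json.dumps({"model": model_hash, "flops": flops_used},
                        sort_keys=True).encode()
    sig = hmac.new(HARDWARE_KEY, claims, hashlib.sha256).hexdigest()
    return {"claims": claims.decode(), "sig": sig}

def verify_report(report: dict, flops_cap: int) -> bool:
    """A regulator checks the signature, then checks the compute threshold."""
    expected = hmac.new(HARDWARE_KEY, report["claims"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, report["sig"]):
        return False  # report was forged or tampered with
    return json.loads(report["claims"])["flops"] <= flops_cap

report = make_report("sha256:abc123", flops_used=8 * 10**24)
print(verify_report(report, flops_cap=10**25))  # True: usage is under the cap
```

The design point the sketch captures: because the signature is produced by hardware the operator cannot inspect or extract keys from, a verifier can trust the compute-usage claim without trusting the operator.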

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| Compute Monitoring | Approach | 69.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 13 KB

# Exploration of secure hardware solutions for safe AI deployment

This collaboration between the Future of Life Institute and Mithril Security explores hardware-backed AI governance tools for transparency, traceability, and confidentiality.

Published:

November 30, 2023

Author:

Future of Life Institute

![](https://futureoflife.org/wp-content/uploads/2024/07/Secure-hardware-solution.jpeg)

#### Contents

1. [Introduction](https://futureoflife.org/ai-policy/hardware-backed-compute-governance/#toc-44245428-0)
2. [Hardware-backed security today](https://futureoflife.org/ai-policy/hardware-backed-compute-governance/#toc-44245428-1)
3. [Proof-of-concept: Secure AI deployment](https://futureoflife.org/ai-policy/hardware-backed-compute-governance/#toc-44245428-2)
4. [Open-source deliverables available](https://futureoflife.org/ai-policy/hardware-backed-compute-governance/#toc-44245428-3)
5. [Future investigations](https://futureoflife.org/ai-policy/hardware-backed-compute-governance/#toc-44245428-4)

## Introduction

AI safety has become a central concern amid recent progress in AI. Debates on the topic have helped outline desirable properties of a safe AI, such as provenance (where the model comes from), confidentiality (how to protect prompts and model weights), and transparency (how to know which model is actually used on the data).

While such discussions have been necessary to define what properties such models should have, they are not _sufficient_, as there are few technical solutions to actually guarantee that those properties are implemented in production.

See our other post with Mithril Security on [verifiable training of AI models](https://futureoflife.org/ai/verifiable-training-of-ai-models/).

For instance, there is no way to guarantee that a good actor who trained an AI satisfying certain safety requirements has actually deployed that same model, nor is it possible to detect whether a malicious actor is serving a harmful one. This is due to the lack of transparency and of technical proof that a specific, trustworthy model is indeed loaded in the backend.
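The gap described above can be made concrete. Without hardware attestation, a verifier can at best hash the weights a provider *claims* to serve and compare them against a digest published when the model was audited; nothing proves those are the weights actually loaded in the backend. A minimal sketch, with an illustrative helper name and digest:

```python
import hashlib

def weights_digest(path: str, chunk_size: int = 1 << 20) -> str:
    """SHA-256 of a weights file, read in 1 MiB chunks to bound memory use."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# Digest published alongside the audited model (hypothetical value).
PUBLISHED_DIGEST = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

def matches_audited_model(path: str) -> bool:
    return weights_digest(path) == PUBLISHED_DIGEST
```

This check is trivially bypassed by a malicious operator, which is precisely the motivation for the hardware-backed attestation the article turns to next.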

This need for technical answers to AI governance challenges has been voiced at the highest levels. For instance, the [White House Executive Order on AI Safety and Security](https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/) highlighted the need to develop privacy-preserving technologies and the importance of having confidential, transparent, and traceable AI systems.

## Hardware-backed security today

Fortunately, modern techniques in cryptography and secure hardware technology offer the building blocks for verifiable systems that can enforce AI governance policies. For example, unfalsifiable cryptographic pr

... (truncated, 13 KB total)
Resource ID: b52975eb93ce5be5 | Stable ID: ZTk1ZmM5OD