Longterm Wiki

MOU with US AI Safety Institute

government

Credibility Rating

5/5
Gold(5)

Gold standard. Rigorous peer review, high editorial standards, and strong institutional reputation.

Rating inherited from publication venue: NIST

A landmark 2024 government announcement establishing formal pre- and post-deployment model access and safety evaluation collaboration between the U.S. government and leading frontier AI labs, relevant to AI governance and oversight mechanisms.

Metadata

Importance: 62/100 · press release · news

Summary

The U.S. AI Safety Institute (NIST) announced Memoranda of Understanding with Anthropic and OpenAI in August 2024, establishing formal frameworks for pre- and post-deployment access to major AI models. These agreements enable collaborative research on capability evaluations, safety risk assessment, and mitigation methods, representing the first formal government-industry partnerships of this kind in the U.S.

Key Points

  • NIST's U.S. AI Safety Institute gains access to major AI models from Anthropic and OpenAI before and after public release.
  • Agreements enable collaborative research on evaluating AI capabilities, assessing safety risks, and developing risk mitigation methods.
  • The Institute will provide feedback to companies on potential safety improvements, coordinating with the UK AI Safety Institute.
  • Agreements build on the Biden-Harris Executive Order on AI and voluntary commitments from leading AI model developers.
  • Represents the first formal government-industry AI safety collaboration agreements of this type in the United States.

Cited by 8 pages

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 4 KB
U.S. AI Safety Institute Signs Agreements Regarding AI Safety Research, Testing and Evaluation With Anthropic and OpenAI | NIST
https://www.nist.gov/news-events/news/2024/08/us-ai-safety-institute-signs-agreements-regarding-ai-safety-research

NEWS

U.S. AI Safety Institute Signs Agreements Regarding AI Safety Research, Testing and Evaluation With Anthropic and OpenAI

These first-of-their-kind agreements between the U.S. government and industry will help advance safe and trustworthy AI innovation for all.
August 29, 2024
 GAITHERSBURG, Md. — Today, the U.S. Artificial Intelligence Safety Institute at the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) announced agreements that enable formal collaboration on AI safety research, testing and evaluation with both Anthropic and OpenAI.

 Each company’s Memorandum of Understanding establishes the framework for the U.S. AI Safety Institute to receive access to major new models from each company prior to and following their public release. The agreements will enable collaborative research on how to evaluate capabilities and safety risks, as well as methods to mitigate those risks. 

 “Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety,” said Elizabeth Kelly, director of the U.S. AI Safety Institute. “These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.”

 Additionally, the U.S. AI Safety Institute plans to provide feedback to Anthropic and OpenAI on potential safety improvements to their models, in close collaboration with its partners at the U.K. AI Safety Institute. 

 The U.S. AI Safety Institute builds on NIST’s more than 120-year legacy of advancing measurement science, technology, standards and related tools. Evaluations under these agreements will further NIST’s work on AI by facilitating deep collaboration and exploratory research on advanced AI systems across a range of risk areas.

 Evaluations conducted pursuant to these agreements will help advance the safe, secure and trustworthy development and use of AI by building on the Biden-Harris adminis

... (truncated, 4 KB total)
Resource ID: 627bb42e8f74be04 | Stable ID: ZTFjMDAwNj