Longterm Wiki

Credibility Rating

Gold (5/5)

Gold standard. Rigorous peer review, high editorial standards, and strong institutional reputation.

Rating inherited from publication venue: NIST

Official NIST update on the AISIC consortium's first plenary, relevant to those tracking U.S. government AI safety governance structures, voluntary reporting frameworks, and public-private collaboration on AI risk management.

Metadata

Importance: 52/100 · Tags: press release, news

Summary

The U.S. AI Safety Institute Consortium (AISIC) held its inaugural in-person plenary meeting in December 2024, bringing together 290+ member organizations to review progress across five AI safety focus areas. Key developments included voluntary risk reporting frameworks, chemical-biological misuse evaluations, and model safeguard assessments, all conducted in partnership with NIST's U.S. AI Safety Institute.

Key Points

  • AISIC was established in February 2024 and comprises 290+ companies, academic institutions, civil society groups, and state/local governments working on AI safety.
  • Five focus areas: generative AI risk management, synthetic content, evaluations, red-teaming, and model safety and security.
  • Voluntary Reporting Template (VRT) developed to help organizations share risk management data aligned with NIST AI 600-1 (Generative AI Profile).
  • Chem-bio misuse research advanced methods for evaluating foundation model risks across capability assessment, threat modeling, and evaluation methodology.
  • Consortium serves as a formal bridge between U.S. government (NIST/US AISI) and private sector on AI safety standards and research.

Cited by 3 pages

Page                       | Type         | Quality
US AI Safety Institute     | Organization | 91.0
AI Policy Effectiveness    | Analysis     | 64.0
Third-Party Model Auditing | Approach     | 64.0

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 4 KB
U.S. AI Safety Institute Consortium Holds First Plenary Meeting to Reflect on Progress in 2024 & Outline Research Priorities for 2025 | NIST

https://www.nist.gov/news-events/news/us-ai-safety-institute-consortium-holds-first-plenary-meeting-reflect-progress-2024
 U.S. AI Safety Institute Consortium Holds First Plenary Meeting to Reflect on Progress in 2024 & Outline Research Priorities for 2025

 
 

 
 
 
Representatives from the consortium’s member companies, organizations and local governments met for the first time in-person to discuss the group’s progress in supporting federal efforts to advance AI safety and spur continued American innovation.
 On Dec. 7, 2024, members of the U.S. AI Safety Institute Consortium (AISIC) gathered for the first time in-person at the University of Maryland to review the group’s work to date and plan how the consortium can continue to serve as a bridge between the technology industry, academia, and civil society and the United States government on critical issues of AI safety research, evaluations and standards. 

 The consortium comprises more than 290 member companies and organizations, as well as various local and state governments. These organizations are on the frontlines of creating and using the most advanced AI systems and developing the foundational scientific research that will help us better understand how to fully harness the benefits of AI, while mitigating potential risks. 

 Since its creation in February, the consortium has been hard at work advancing scientific inquiry and collaborative research across five key issue areas, including: (1) generative AI risk management, (2) synthetic content, (3) evaluations, (4) red-teaming, and (5) model safety and security. 

 This ongoing collaboration is done in close partnership with government scientists and technical experts from the National Institute of Standards and Technology’s (NIST) U.S. AI Safety Institute (US AISI) and AI Innovation Lab with the goal of helping to advance the science of AI safety and inform the U.S. government’s efforts to enable trustworthy AI innovation. 

 “It’s rare to see such a broad swath of companies, academic institutions, and civil society organizations selflessly working with one another and with the government in the service of a common goal,” said Under Secretary of C

... (truncated, 4 KB total)
Resource ID: 2ef355efe9937701 | Stable ID: YzI3MTQ5ZT