Longterm Wiki

Digital.gov: Overview of NIST Initiatives on AI Standards

government

This 2020 event predates the NIST AI Risk Management Framework (2023) and represents early U.S. government standardization efforts; useful background for understanding how federal AI governance thinking evolved institutionally.

Metadata

Importance: 38/100 · press release · educational

Summary

A Digital.gov event featuring NIST researchers presenting the organization's initiatives to develop AI standards and principles and to address critical issues in trustworthy AI development. The presentation covers NIST's role in measurement science, standards, and testing for AI technologies, emphasizing reliability, safety, and accuracy across industry, academia, and government. This reflects early institutional groundwork that preceded the NIST AI Risk Management Framework.

Key Points

  • NIST researchers Elham Tabassi and Mark Przybocki outline the agency's fundamental and applied research agenda for trustworthy AI development.
  • NIST's focus areas include measurement science, standards, and testing methodologies applicable to emerging AI technologies.
  • The initiative spans cross-sector collaboration involving industry, academia, and government stakeholders.
  • Core trustworthiness properties emphasized include reliability, safety, and accuracy in AI systems.
  • This event represents early U.S. federal government efforts to establish technical foundations for AI governance and standards.

Cited by 1 page

Page                 Type          Quality
NIST and AI Safety   Organization  63.0

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 4 KB
Overview of NIST Initiatives on Artificial Intelligence Standards, Principles, and Critical AI Issues | Digital.gov
 Artificial intelligence (AI) has the potential to impact nearly all aspects of our society, including our economy, but the development and use of the new technologies it brings are not without technical challenges and risks. AI must be developed in a trustworthy manner to ensure reliability, safety, and accuracy.

 Elham Tabassi and Mark Przybocki will provide an overview of ongoing National Institute of Standards and Technology (NIST) efforts supporting fundamental and applied research and standards for AI technologies.

Speakers:
 Elham Tabassi is the chief of staff in the Information Technology Laboratory (ITL) at NIST. ITL, one of six research laboratories within NIST, supports NIST's mission to promote U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life. ITL conducts fundamental and applied research in computer science and engineering, mathematics, and statistics that cultivates trust in information technology and metrology by developing and disseminating standards, measurements, and testing for interoperability, security, usability, and reliability of information systems.

Mark Przybocki is the acting chief of the Information Access Division (IAD), one of seven technical divisions in ITL. In this capacity, he leads NIST collaborations with industry, academia, and other government agencies to foster trust in emerging technologies that make sense of complex (human) information by developing improvements to the measurement science, managing technical evaluations, and contributing to standards. The IAD is home to the high-profile Text Retrieval Conference (TREC), several biometric benchmarking programs, and a growing number of technical evaluations for emerging human language, natural language processing, and speech, image, and video analytics technologies. Mr. Przybocki's current interests are in AI benchmarking, explainable AI, and bias across the AI development lifecycle.

This talk is hosted by the AI Community of Practice (CoP). This community aims to unite federal employees who are active in, or interested in, AI policy, technology, standards, and programs to accelerate the thoughtful adoption of AI across the federal government.
... (truncated, 4 KB total)
Resource ID: 9f6acd0c1e030aa7 | Stable ID: ZjUzMDdkMG