NIST Special Publication 1270: Towards a Standard for Identifying and Managing Bias in AI
Government
Credibility Rating
Gold standard. Rigorous peer review, high editorial standards, and strong institutional reputation.
Rating inherited from publication venue: NIST
This NIST publication is a key U.S. government standards document on AI bias management, relevant to researchers and practitioners working on fairness, accountability, and governance frameworks for AI systems.
Metadata
Summary
NIST Special Publication 1270 provides a framework for identifying and managing bias throughout the AI lifecycle, recognizing that biases embedded in AI systems can cause harmful outcomes regardless of developer intent. Published in March 2022, it addresses how ambiguous human concepts become quantified and codified in AI decision-making, undermining public trust. It serves as a foundational document within the broader NIST AI Series on responsible AI development.
Key Points
- Identifies bias as endemic across AI technology processes, capable of producing harmful outcomes even when organizational intent is responsible.
- Focuses on the full AI lifecycle, providing guidance on where and how biases emerge and can be managed or mitigated.
- Recognizes that digital interactions commodify human behavior, turning ambiguous concepts into categorical decisions affecting people's lives.
- Positioned as a standard-setting document intended to build public trust in AI systems through systematic bias management.
- Part of the NIST AI Series, meant to be read alongside other NIST AI publications including the AI Risk Management Framework.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Colorado Artificial Intelligence Act | Policy | 53.0 |
Cached Content Preview
Towards a Standard for Identifying and Managing Bias in Artificial Intelligence | NIST
https://www.nist.gov/publications/towards-standard-identifying-and-managing-bias-artificial-intelligence
Towards a Standard for Identifying and Managing Bias in Artificial Intelligence
Published
March 15, 2022
Author(s)
Reva Schwartz, Apostol Vassilev, Kristen K. Greene, Lori Perine, Andrew Burt, Patrick Hall
Abstract
As individuals and communities interact in and with an environment that is increasingly virtual they are often vulnerable to the commodification of their digital exhaust. Concepts and behavior that are ambiguous in nature are captured in this environment, quantified, and used to categorize, sort, recommend, or make decisions about people's lives. While many organizations seek to utilize this information in a responsible manner, biases remain endemic across technology processes and can lead to harmful impacts regardless of intent. These harmful outcomes, even if inadvertent, create significant challenges for cultivating public trust in artificial intelligence (AI). SP 1270 is a NIST Artificial Intelligence publication and should be read in conjunction with all publications in the NIST AI Series, which was established in January 2023.
Citation
Special Publication (NIST SP) - 1270
Report Number
1270
NIST Pub Series
Special Publication (NIST SP)
Pub Type
NIST Pubs
Download Paper
https://doi.org/10.6028/NIST.SP.1270
Keywords
bias, trustworthiness, AI safety, AI lifecycle, AI development
Information technology, Fundamental AI and Artificial intelligence
Citation
Schwartz, R., Vassilev, A., Greene, K., Perine, L., Burt, A. and Hall, P. (2022), Towards a Standard for Identifying and Managing Bias in Artificial Intelligence, Special Publication (NIST SP), National Institute of Standards and Technology, Gaithersburg, MD, [online], https://doi.org/10.6028/NIST.SP.1270, https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=934464 (Accessed March 14, 2026)
... (truncated, 3 KB total)