Longterm Wiki

Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification - researchr publication

web

Foundational AI fairness paper demonstrating real-world bias in deployed commercial systems; critical reference for discussions of evaluation methodology, algorithmic accountability, and the social impacts of AI deployment.

Metadata

Importance: 82/100 · conference paper · primary source

Summary

Seminal 2018 study by Joy Buolamwini and Timnit Gebru auditing commercial facial analysis AI systems for accuracy disparities across gender and skin tone. The research found error rates of up to 34.7% for darker-skinned females, compared with error rates below 1% for lighter-skinned males, exposing significant intersectional bias in deployed AI products from Microsoft, IBM, and Face++. This work became foundational in the AI fairness and algorithmic accountability movement.

Key Points

  • Commercial gender classifiers from major tech companies showed error rates of up to 34.7% for darker-skinned women, compared with at most 0.8% for lighter-skinned men
  • Introduced intersectional analysis combining race (skin tone) and gender, showing that examining each dimension separately obscures compounded disparities
  • Demonstrated that widely used benchmark datasets (like IJB-A and Adience) were heavily skewed toward lighter-skinned and male subjects
  • Directly prompted Microsoft, IBM, and others to improve their facial analysis APIs following the study's publication
  • Helped establish the field of algorithmic auditing as a methodology for evaluating real-world AI system fairness
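The intersectional auditing approach described above can be sketched in a few lines: instead of reporting one aggregate accuracy, error rates are computed per (skin tone, gender) subgroup. The data below is entirely hypothetical and only illustrates how a single-number accuracy can hide a large subgroup disparity; it is not the paper's actual benchmark or results.

```python
# Illustrative sketch with hypothetical data: an intersectional audit
# surfaces disparities that a single aggregate accuracy figure hides.
from collections import defaultdict

def subgroup_error_rates(records):
    """records: iterable of (skin_tone, gender, correct) tuples."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for skin_tone, gender, correct in records:
        key = (skin_tone, gender)
        totals[key] += 1
        if not correct:
            errors[key] += 1
    return {k: errors[k] / totals[k] for k in totals}

# Hypothetical audit sample: overall accuracy is ~89%, but the
# darker-skinned female subgroup carries most of the errors.
records = (
    [("lighter", "male", True)] * 99 + [("lighter", "male", False)] * 1 +
    [("lighter", "female", True)] * 95 + [("lighter", "female", False)] * 5 +
    [("darker", "male", True)] * 94 + [("darker", "male", False)] * 6 +
    [("darker", "female", True)] * 67 + [("darker", "female", False)] * 33
)
rates = subgroup_error_rates(records)
print(rates[("lighter", "male")])   # 0.01
print(rates[("darker", "female")])  # 0.33
```

Breaking accuracy down this way is what reveals compounded disparities: averaging over gender alone or skin tone alone would mask the worst-performing intersection.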

Cited by 1 page

Page: Deep Learning Revolution Era · Type: Historical · Quality: 44.0

Cached Content Preview

HTTP 200 · Fetched Feb 22, 2026 · 1 KB
Joy Buolamwini, Timnit Gebru. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. In Sorelle A. Friedler, Christo Wilson, editors, Conference on Fairness, Accountability and Transparency, FAT 2018, 23-24 February 2018, New York, NY, USA. Volume 81 of Proceedings of Machine Learning Research, pages 77-91, PMLR, 2018. [doi]

Resource ID: 0a7e3d48ddc5a853 | Stable ID: OTRhZGJjNz