Skip to content
Longterm Wiki
Back

Gender, Race, and Intersectional Bias in AI Resume Screening via Language Model Retrieval

web

Credibility Rating

4/5 (High)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Brookings Institution

Relevant to AI safety discussions around real-world deployment harms, fairness evaluation methodologies, and the governance of high-stakes AI systems in employment contexts; useful for policy and responsible deployment sections.

Metadata

Importance: 62/100 · organizational report · analysis

Summary

This Brookings Institution study examines how AI-powered resume screening systems using large language models exhibit measurable gender and racial biases, with intersectional effects that compound disadvantages for certain demographic groups. The research demonstrates that retrieval-based LLM hiring tools can systematically rank candidates differently based on protected characteristics, raising concerns about fairness and legal compliance in automated hiring. It calls for greater scrutiny, auditing standards, and governance frameworks for AI deployment in high-stakes employment decisions.

Key Points

  • LLM-based resume screening systems show statistically significant bias against women and racial minorities, with intersectional combinations producing amplified disparate outcomes.
  • Retrieval-augmented AI hiring tools can embed and perpetuate historical hiring biases present in training data without explicit discriminatory intent.
  • Intersectional bias (e.g., against Black women relative to white men) often exceeds the sum of the individual gender and race biases, exposing gaps in single-axis fairness testing.
  • The study recommends mandatory algorithmic audits and transparency requirements for AI tools used in employment screening decisions.
  • Findings have direct policy relevance for regulators, employers, and AI developers navigating emerging AI governance frameworks like the EU AI Act.
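The audit the key points call for can be illustrated with a minimal sketch. This is not the study's methodology: the group counts below are hypothetical, and the 0.8 threshold is the EEOC "four-fifths rule," a common heuristic for flagging disparate impact. The sketch also shows the intersectional point: a subgroup's ratio can be worse than either single-axis bias alone.

```python
# Illustrative selection counts per intersectional group: (selected, total).
# These numbers are hypothetical, not figures from the Brookings study.
counts = {
    ("man", "white"):   (48, 100),
    ("woman", "white"): (40, 100),
    ("man", "black"):   (38, 100),
    ("woman", "black"): (25, 100),
}

def selection_rates(counts):
    """Fraction of candidates selected in each group."""
    return {group: sel / tot for group, (sel, tot) in counts.items()}

def disparate_impact(rates, reference):
    """Each group's selection rate divided by the reference group's.
    The four-fifths rule flags ratios below 0.8."""
    ref = rates[reference]
    return {group: rate / ref for group, rate in rates.items()}

rates = selection_rates(counts)
ratios = disparate_impact(rates, reference=("man", "white"))

for group, ratio in sorted(ratios.items(), key=lambda kv: kv[1]):
    flag = "FLAG" if ratio < 0.8 else "ok"
    print(group, round(ratio, 2), flag)
```

With these illustrative counts, the ("woman", "black") ratio falls well below what either the gender gap or the race gap alone would suggest, which is why the study argues that single-axis testing misses compounded harms.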

Cited by 1 page

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 41 KB

#### Gender, race, and intersectional bias in AI resume screening via language model retrieval

##### Contact

Governance Studies Media Office
[gsmedia@brookings.edu](mailto:gsmedia@brookings.edu) | 202.540.7724


Sections

- Why empirical investigations into AI use in hiring are limited
- The use of large language models in hiring
- Results
- Limitations of the current debiasing approaches
- Ethical and equitable AI use in employment
- Conclusion
- Appendix – Research Study Methodology


... (truncated, 41 KB total)
Resource ID: aa9bd39c247651f0 | Stable ID: ZTAxNmRiMW