A 2024 University of Washington study
washington.edu/news/2024/10/31/ai-bias-resume-screening-race-gender/
Empirical research documenting discriminatory outcomes from LLM-based hiring tools; relevant to AI deployment governance, fairness auditing, and the gap between widespread AI adoption and regulatory oversight in high-stakes decision-making contexts.
Metadata
Importance: 62/100 · news article · primary source
Summary
A University of Washington study tested three open-source large language models on over 550 real-world resumes and found significant racial, gender, and intersectional bias: LLMs favored white-associated names 85% of the time, female-associated names only 11% of the time, and never favored Black male-associated names over white male-associated names. The research highlights risks of deploying AI in hiring without adequate regulation or auditing.
Key Points
- Three state-of-the-art open-source LLMs showed strong racial bias, favoring white-associated names in resume ranking 85% of the time.
- Female-associated names were favored only 11% of the time, and Black male-associated names were never ranked above white male-associated names.
- The study varied names across more than 550 real-world resumes, examining intersectional bias across race and gender simultaneously.
- An estimated 99% of Fortune 500 companies use some form of hiring automation, yet almost no regulatory auditing of these AI systems exists.
- Outside of a New York City law, there is currently no independent regulatory audit mechanism for AI-based hiring tools.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI-Driven Institutional Decision Capture | Risk | 73.0 |
Cached Content Preview
HTTP 200 · Fetched Mar 20, 2026 · 8 KB
 University of Washington research found significant racial, gender and intersectional bias in how three state-of-the-art large language models, or LLMs, ranked resumes. Photo: [Alejandro Escamilla/Unsplash](https://unsplash.com/photos/macbook-air-on-brown-wooden-table-N7XodRrbzS0)
The future of hiring, it seems, is automated. Applicants can now [use artificial intelligence bots to apply to job listings by the thousands](https://www.404media.co/email/df2c731a-b61b-4ed1-91c1-10874e9efe9e/?ref=daily-stories-newsletter). And companies — which have long automated parts of the process — are now [deploying the latest AI large language models](https://www.cnbc.com/2023/10/11/microsoft-amazon-among-the-companies-shaping-ai-enabled-hiring-policy.html) to write job descriptions, sift through resumes and screen applicants. An estimated 99% of Fortune 500 companies now [use some form of automation in their hiring process](https://www.forbes.com/sites/janehanson/2023/09/30/ai-is-replacing-humans-in-the-interview-processwhat-you-need-to-know-to-crush-your-next-video-interview/).
This automation can boost efficiency, and some claim it can make the hiring process less discriminatory. But new University of Washington research found significant racial, gender and intersectional bias in how three state-of-the-art large language models, or LLMs, ranked resumes. The researchers varied names associated with white and Black men and women across over 550 real-world resumes and found the LLMs favored white-associated names 85% of the time, female-associated names only 11% of the time, and never favored Black male-associated names over white male-associated names.
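The name-substitution design behind these numbers is straightforward to sketch. The following is a minimal illustration of that kind of pairwise resume audit, not the authors' code: the `llm_rank` callable is a generic stand-in for whatever API call asks a model to pick the stronger candidate, and the `{NAME}` placeholder and example name pools are hypothetical, not the study's actual materials.

```python
import itertools
import random

# Hypothetical name pools: the study used names statistically associated
# with white and Black men and women; these entries are placeholders.
NAMES = {
    ("white", "male"): ["Todd Becker", "Brad Schmidt"],
    ("white", "female"): ["Claire Becker", "Heather Schmidt"],
    ("Black", "male"): ["Darnell Jackson", "Tyrone Washington"],
    ("Black", "female"): ["Latoya Jackson", "Tanisha Washington"],
}

def rank_pair(llm_rank, resume_text, name_a, name_b):
    """Rank two copies of the same resume that differ only in the name.

    `llm_rank` stands in for the model under test; it should return 0 if
    the first resume is preferred, else 1.
    """
    a = resume_text.replace("{NAME}", name_a)
    b = resume_text.replace("{NAME}", name_b)
    return llm_rank(a, b)

def audit(llm_rank, resumes, trials_per_pair=10):
    """Tally how often each demographic group's names win pairwise rankings."""
    wins = {group: 0 for group in NAMES}
    totals = {group: 0 for group in NAMES}
    for resume in resumes:
        for g1, g2 in itertools.combinations(NAMES, 2):
            for _ in range(trials_per_pair):
                n1, n2 = random.choice(NAMES[g1]), random.choice(NAMES[g2])
                winner = g1 if rank_pair(llm_rank, resume, n1, n2) == 0 else g2
                wins[winner] += 1
                totals[g1] += 1
                totals[g2] += 1
    return {group: wins[group] / totals[group] for group in NAMES}
```

Run over a resume corpus with a real model behind `llm_rank`, group-level preference rates like the 85% and 11% figures above fall directly out of this tally, and comparing rates across race-gender pairs is what surfaces the intersectional effects the researchers report.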
The team [presented its research](https://ojs.aaai.org/index.php/AIES/article/view/31748/33915) Oct. 22 at the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society in San Jose.
“The use of AI tools for hiring procedures is already widespread, and it’s proliferating faster than we can regulate it,” said lead author [Kyra Wilson](https://kyrawilson.github.io/me/), a UW doctoral student in the Information School. “Currently, outside of [a New York City law](https://www.wsj.com/business/new-york-city-passed-an-ai-hiring-law-so-far-few-companies-are-following-it-7e31a5b7), there’s no regulatory, independent audit of these systems, so we don’t know if they’re biased and discriminating based on protected characteristics such as race and gender. And because a lot of these systems are proprietary, we are limited to analyzing how they work by approximating real-world systems.”
Previous studies have found [ChatGPT exhibits racial]
... (truncated, 8 KB total)
Resource ID: 398daf2d4c6eca6e | Stable ID: NDc4MDljYW