Research from Lehigh University
news.lehigh.edu/ai-exhibits-racial-bias-in-mortgage-under...
Relevant to AI safety discussions around algorithmic fairness, deployment risks in high-stakes domains, and the governance challenges of holding automated decision systems accountable for discriminatory outcomes.
Metadata
Importance: 52/100 · news article · news
Summary
Lehigh University research investigates racial bias in AI-driven mortgage underwriting, finding that large language models asked to evaluate loan applications can perpetuate or amplify discriminatory outcomes against minority applicants. The study highlights how automated systems can encode and reproduce historical patterns of racial discrimination in financial services.
Key Points
- AI mortgage underwriting systems demonstrate measurable racial bias, disadvantaging minority applicants in loan approval decisions.
- Algorithmic automation does not eliminate human bias but may instead institutionalize and scale discriminatory patterns.
- The research raises concerns about accountability gaps when consequential financial decisions are delegated to opaque AI systems.
- Findings have implications for fair lending regulations and the need for bias auditing of AI systems in high-stakes domains.
- Illustrates the broader challenge of deploying AI in domains with historically discriminatory practices without adequate bias mitigation.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI-Driven Institutional Decision Capture | Risk | 73.0 |
Cached Content Preview
HTTP 200 · Fetched Mar 20, 2026 · 11 KB

Credit: Moor Studio / iStock
# AI Exhibits Racial Bias in Mortgage Underwriting Decisions
LLM training data likely reflects persistent societal biases, but simple fixes can help, according to findings from Donald Bowen III, McKay Price and Ke Yang.
[University Communications](https://news.lehigh.edu/author/x020)
August 20, 2024
[Academics](https://news.lehigh.edu/tags/academics)
[College of Business](https://news.lehigh.edu/tags/college-of-business)
[Faculty](https://news.lehigh.edu/tags/faculty)
[Research](https://news.lehigh.edu/tags/research)
[Artificial Intelligence](https://news.lehigh.edu/tags/artificial-intelligence)
Putting AI to use in mortgage lending decisions could lead to discrimination against Black applicants, according to new research. But researchers say there may be a surprisingly simple solution to mitigate this potential bias.
In an experiment using leading commercial large language models (LLMs) to evaluate loan application data, Lehigh researchers found that LLMs consistently recommended denying more loans and charging higher interest rates to Black applicants compared to otherwise identical white applicants.
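The experimental design described above is a classic paired (correspondence) audit: submit otherwise-identical applications that differ only in the applicant's race, and compare the model's recommendations across the pairs. The sketch below is a hypothetical illustration of that design, not the authors' actual code; the `underwrite` function is a deliberately biased stand-in for a call to the LLM under test, so the audit has something to detect.

```python
# Hypothetical paired-audit sketch (not the Lehigh researchers' code).
# Each application is duplicated with only the race field changed, and
# the approval-rate gap between the two variants is measured.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Application:
    credit_score: int
    income: int
    loan_amount: int
    race: str  # the only field varied between pair members

def underwrite(app: Application) -> bool:
    """Stand-in for the model under test (assumption: returns approve/deny).

    This toy rule is intentionally biased so the audit detects a gap;
    in a real audit this would be a call to a commercial LLM."""
    threshold = 660 if app.race == "white" else 700
    return app.credit_score >= threshold

def paired_audit(apps: list[Application]) -> float:
    """Approval-rate gap: white-variant rate minus Black-variant rate."""
    white = [underwrite(replace(a, race="white")) for a in apps]
    black = [underwrite(replace(a, race="Black")) for a in apps]
    n = len(apps)
    return sum(white) / n - sum(black) / n

# Synthetic applications spanning a range of credit scores (600–750).
apps = [Application(credit_score=cs, income=80_000,
                    loan_amount=300_000, race="")
        for cs in range(600, 760, 10)]
gap = paired_audit(apps)
print(f"approval-rate gap: {gap:.2f}")  # prints "approval-rate gap: 0.25"
```

Because the two variants of each application are identical in every underwriting-relevant field, any systematic gap in outcomes can only come from the race signal, which is what makes this design a direct test for the disparity the study reports.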
This discovery is particularly alarming given the historical and ongoing racial disparities in homeownership.
“This finding suggests that LLMs are learning from the data they are trained on, which includes a history of racial disparities in mortgage l
... (truncated, 11 KB total)
Resource ID: edca1d403eb2dff5 | Stable ID: MmQ0MjM4OW