Longterm Wiki

How Racial Bias Infected a Major Health-Care Algorithm

web

A prominent real-world case study of algorithmic bias in high-stakes AI deployment, frequently cited in discussions of fairness, proxy variables, and the societal risks of poorly audited automated decision systems in healthcare.

Metadata

Importance: 68/100 · news article · analysis

Summary

This article examines a widely used health care algorithm that systematically underestimated the medical needs of Black patients relative to white patients with the same health conditions, directing fewer resources to Black patients. The bias stemmed from using health care costs as a proxy for health needs: costs reflected historical disparities in access to care rather than actual illness severity. The case illustrates how seemingly neutral algorithmic design choices can encode and amplify systemic racial inequities.

Key Points

  • A major health algorithm used by insurers and hospitals assigned Black patients lower 'risk scores' than equally sick white patients, reducing their access to care management programs.
  • The bias originated from using historical healthcare spending as a proxy for health need—spending that was lower for Black patients due to systemic barriers to care access, not better health.
  • The algorithm affected an estimated 200 million people in the U.S., demonstrating the scale at which biased AI systems can cause harm when deployed in critical sectors.
  • The case shows that algorithmic bias can be unintentional and arise from flawed proxy variables, making pre-deployment auditing and outcome monitoring essential.
  • Switching the target variable from costs to health outcomes could reduce the measured racial disparity by more than 80%, highlighting how design choices directly shape equity outcomes.
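The proxy-variable failure in the points above can be illustrated with a minimal synthetic sketch (not the study's data or model; the patient records and function name here are invented). Two patients have identical illness burden, but one has lower historical spending because of barriers to accessing care; ranking by cost and ranking by health need then produce different orderings.

```python
# Minimal synthetic illustration (not the study's data): two patients with
# identical illness burden but different historical spending, as can happen
# when one group faces systemic barriers to accessing care.
patients = [
    {"id": "A", "chronic_conditions": 5, "annual_cost": 12000},  # fuller access to care
    {"id": "B", "chronic_conditions": 5, "annual_cost": 6000},   # reduced access to care
]

def rank_by(people, key):
    # A higher value of `key` means a higher predicted "risk" under that
    # choice of target variable.
    return [p["id"] for p in sorted(people, key=lambda p: p[key], reverse=True)]

by_cost = rank_by(patients, "annual_cost")         # cost proxy: A ranked above B
by_need = rank_by(patients, "chronic_conditions")  # outcome target: A and B tie
```

A model trained to predict cost ranks patient A above the equally sick patient B, which is the mechanism the researchers identified; a model trained on health outcomes would treat them identically.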

Cited by 1 page

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 13 KB
![Girls holding lollipops at a doctor's office](https://www.chicagobooth.edu/-/media/project/chicago-booth/chicago-booth-review/2019/october/chicago-booth-girls-racial-difference-cbr.jpg?cx=0.48&cy=0.43&cw=1880&ch=783&hash=F6CEFDDD2D6A0532A1187CA34FBD5A6C)

Pete Ryan

# How Racial Bias Infected a Major Health-Care Algorithm

- By [Jeff Cockrell](https://www.chicagobooth.edu/review/authors-experts/c/jeff-cockrell)
- October 25, 2019
- [CBR - Economics](https://www.chicagobooth.edu/review/economics)

As data science has developed in recent decades, algorithms have come to play a role in assisting decision-making in a wide variety of contexts, making predictions that in some cases have enormous human consequences. Algorithms may help decide who is admitted to an elite school, approved for a mortgage, or allowed to await trial from home rather than behind bars.

But there are well-publicized concerns that algorithms may perpetuate or systematize biases. And research by University of California at Berkeley’s [Ziad Obermeyer](https://www.chicagobooth.edu/review/authors-experts/o/ziad-obermeyer), [Brian Powers](https://www.chicagobooth.edu/review/authors-experts/p/brian-powers) of Boston’s Brigham and Women’s Hospital, [Christine Vogeli](https://www.chicagobooth.edu/review/authors-experts/v/christine-vogeli) of Partners HealthCare, and Chicago Booth’s [Sendhil Mullainathan](https://www.chicagobooth.edu/review/authors-experts/m/sendhil-mullainathan) finds that one algorithm, used to make an important health-care determination for millions of patients in the United States, produces racially biased results.

The algorithm in question is used to help identify candidates for enrollment in “high-risk care management” programs, which provide additional resources and attention to patients with complex health needs. Such programs, which can improve patient outcomes and reduce costs, are employed by many large US health systems, so the decision of whom to enroll affects tens of millions of people. The algorithm assigns each patient a risk score that guides enrollment decisions: a patient with a risk score at or above the 97th percentile is automatically identified for enrollment, while one with a score in the 55th through 96th percentiles is flagged for possible enrollment depending on input from the patient’s doctor.
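The two-threshold enrollment rule described above can be sketched as a simple function (illustrative only; the function and label names are invented, and the real system's interface is not public):

```python
def enrollment_action(risk_percentile: float) -> str:
    """Map a risk-score percentile to an enrollment action, following the
    thresholds described in the article (hypothetical sketch)."""
    if risk_percentile >= 97:
        return "auto-identify"       # 97th percentile and above: automatic identification
    elif risk_percentile >= 55:
        return "refer-to-physician"  # 55th-96th percentile: doctor's input decides
    else:
        return "no-action"
```

Because enrollment hinges entirely on where a patient's score falls relative to these cutoffs, any systematic depression of scores for one group translates directly into fewer of that group crossing the thresholds.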

Obermeyer, Powers, Vogeli, and Mullainathan find that black patients are on average far less healthy than white patients assigned the same score. For instance, for patients with risk scores in the 97th percentile of the researchers’ sample, black patients had on average 26 percent more chronic illnesses than white patients did. The result of this bias: black patients were significantly less likely to be identified for program enrollment than they would have been otherwise. Due to algorithmic bias, 17.7 percent of patients automatically identified for enrollment were black; without it, the researchers calculate, 46.5 percent would

... (truncated, 13 KB total)
Resource ID: dd87aea8332e4cfa | Stable ID: ZWQxYWY1MG