Longterm Wiki

ProPublica: COMPAS Investigation

web

A foundational piece of investigative journalism that became a touchstone in AI ethics and governance discussions; essential reading for understanding real-world consequences of deploying biased algorithms in high-stakes public-sector decision-making.

Metadata

Importance: 82/100 · news article · primary source

Summary

ProPublica's landmark 2016 investigative report exposing racial bias in the COMPAS algorithm used to predict recidivism risk in criminal sentencing. The investigation found that Black defendants were nearly twice as likely as white defendants to be falsely flagged as future criminals, while white defendants were more often incorrectly labeled low risk. This work sparked a major public debate about algorithmic fairness, transparency, and accountability in high-stakes automated decision-making.

Key Points

  • COMPAS recidivism risk scores exhibited significant racial disparities: Black defendants were falsely flagged as high risk at roughly twice the rate of white defendants.
  • The algorithm's inner workings were proprietary and opaque, raising serious concerns about due process when such scores influenced sentencing decisions.
  • The investigation highlighted the 'automation bias' risk: judges and courts may over-rely on algorithmic outputs without adequate scrutiny.
  • Sparked a major academic and policy debate about whether different algorithmic fairness metrics can be satisfied simultaneously (the 'impossibility theorem' of fairness; see the sketch after this list).
  • Demonstrated that deploying AI in high-stakes, life-affecting domains without proper accountability mechanisms can entrench and scale existing societal biases.
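
The fairness-metric tension referenced above can be made concrete with a small numerical sketch. The numbers below are illustrative placeholders, not ProPublica's data: they show only that when two groups reoffend at different base rates, a score that is equally well calibrated for both groups (same precision among those flagged high risk) will generally yield different false positive and false negative rates for them.

```python
# Illustrative sketch of the calibration vs. error-rate tension.
# All figures are hypothetical and chosen only to make the arithmetic clear.

def rates(n, base_rate, flag_rate, ppv):
    """Derive group-level error rates from aggregate quantities.

    n          -- number of defendants in the group
    base_rate  -- fraction who actually reoffend
    flag_rate  -- fraction labeled high risk by the score
    ppv        -- precision: fraction of those flagged who actually reoffend
    """
    flagged = n * flag_rate
    true_pos = flagged * ppv             # flagged and reoffended
    false_pos = flagged * (1 - ppv)      # flagged but did not reoffend
    actual_pos = n * base_rate
    actual_neg = n - actual_pos
    fpr = false_pos / actual_neg         # non-reoffenders wrongly flagged high risk
    fnr = (actual_pos - true_pos) / actual_pos  # reoffenders missed by the score
    return fpr, fnr

# Two hypothetical groups with the same calibration (PPV = 0.6) but different base rates.
for name, base_rate, flag_rate in [("group A", 0.5, 0.5), ("group B", 0.4, 0.3)]:
    fpr, fnr = rates(1000, base_rate, flag_rate, ppv=0.6)
    print(f"{name}: FPR = {fpr:.2f}, FNR = {fnr:.2f}")
```

With these hypothetical inputs, both groups share a precision of 0.6, yet their false positive rates come out at 0.40 and 0.20, illustrating why equalizing one fairness metric can force others apart when base rates differ.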

Cited by 2 pages

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 32 KB

Machine Bias — ProPublica

On a spring afternoon in 2014, Brisha Borden was running late to pick up her god-sister from school when she spotted an unlocked kid’s blue Huffy bicycle and a silver Razor scooter. Borden and a friend grabbed the bike and scooter and tried to ride them down the street in the Fort Lauderdale suburb of Coral Springs.

Just as the 18-year-old girls were realizing they were too big for the tiny conveyances — which belonged to a 6-year-old boy — a woman came running after them saying, “That’s my kid’s stuff.” Borden and her friend immediately dropped the bike and scooter and walked away.

But it was too late — a neighbor who witnessed the heist had already called the police. Borden and her friend were arrested and charged with burglary and petty theft for the items, which were valued at a total of $80.

Compare their crime with a similar one: The previous summer, 41-year-old Vernon Prater was picked up for shoplifting $86.35 worth of tools from a nearby Home Depot store.

Prater was the more seasoned criminal. He had already been convicted of armed robbery and attempted armed robbery, for which he served five years in prison, in addition to another armed robbery charge. Borden had a record, too, but it was for misdemeanors committed when she was a juvenile.

Yet something odd happened when Borden and Prater were booked into jail: A computer program spat out a score predicting the likelihood of each committing a future crime. Borden — who is black — was rated a high risk. Prater — who is white — was rated a low risk.

Two years later, we know the computer algorithm got it exactly backward. Borden has not been charged with any new crimes. Prater is serving an eight-year prison term for subsequently breaking into a warehouse and stealing thousands of dollars’ worth of electronics.

Scores like this — known as risk assessments — are increasingly common in courtrooms across the nation. They are used to inform decisions about who can be set free at every stage of the criminal justice system, from assigning bond amounts — as is the case in Fort Lauderdale — to even more fundamental decisions about defendants’ freedom. In Arizona, Colorado, Delaware, Kentucky, Louisiana, Oklahoma, Virginia, Washington and Wisconsin, the results of such assessments are given to judges during criminal sentencing.

Rating a defendant’s risk of future crime is often done in conjunction with an evaluation of a defendant’s rehabilitation needs. The Justice Department’s National Institute of Corrections now encourages the use of such combined assessments at every stage of the criminal justice process. And a landmark sentencing [reform bill](https://www.congress.gov/bill/114th-congress/senate-bill/2123/text) currently pending in Congress would mandate the use of such assessments in federal prisons.


... (truncated, 32 KB total)
Resource ID: 81813c9c33253098 | Stable ID: NzBiZGQ1YT