New Research Casts More Doubt on Risk Assessment Tool


Photo by Cristoph Scholz via Flickr

Two computer scientists have cast more doubt on the accuracy of risk assessment tools.

After comparing predictions made by a group of untrained adults to those of the risk assessment software COMPAS, the authors found that the software “is no more accurate or fair than predictions made by people with little or no criminal justice expertise,” and that, moreover, “a simple linear predictor provided with only two features is nearly equivalent to COMPAS with its 137 features.”

In a study published Tuesday by Science Advances, Julia Dressel, a software engineer, and Hany Farid, a computer science professor at Dartmouth, concluded that “collectively, these results cast significant doubt on the entire effort of algorithmic recidivism prediction.”

COMPAS, short for Correctional Offender Management Profiling for Alternative Sanctions, has been used to assess more than one million criminal offenders since its inception two decades ago.

In response to a May 2016 investigation by ProPublica that concluded the software is both unreliable and racially biased, Northpointe, the company that developed COMPAS, defended its results, arguing the algorithm discriminates between recidivists and non-recidivists equally well for both white and black defendants. ProPublica stood by its own study, and the debate ended in a stalemate.

Rather than weigh in on the algorithm’s fairness, the authors of this study simply compared the software’s results to those of “untrained humans,” and found that “people from a popular online crowdsourcing marketplace—who, it can reasonably be assumed, have little to no expertise in criminal justice—are as accurate and fair as COMPAS at predicting recidivism.”

Each untrained participant was randomly assigned 50 cases from a pool of 1,000 defendants and given a few facts, including the defendant’s age, sex and criminal history, but excluding race. Participants were asked to predict the likelihood of re-offending within two years. The mean and median accuracy of these “untrained humans” were found to be 62.1 percent and 64 percent, respectively.

The authors then compared these results to COMPAS predictions for the same set of 1,000 defendants, and found the program to have a median accuracy of 65.2 percent.
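The mean and median figures reported above are plain summary statistics over per-participant accuracy scores. A minimal sketch of that aggregation, using invented per-participant scores (these numbers are for illustration only and are not the study’s data):

```python
import statistics

# Hypothetical per-participant accuracy scores (fraction of the 50 assigned
# cases each participant predicted correctly). Invented for illustration;
# the real study pooled hundreds of crowdworkers.
participant_accuracy = [0.58, 0.62, 0.64, 0.66, 0.60, 0.70, 0.64, 0.56]

mean_acc = statistics.mean(participant_accuracy)
median_acc = statistics.median(participant_accuracy)
print(f"mean: {mean_acc:.1%}, median: {median_acc:.1%}")
```

With an even number of participants, the median is the average of the two middle scores, which is why it can differ from the mean when a few participants score unusually high or low.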

These results caused Dressel and Farid to wonder about the software’s level of sophistication.

Although they didn’t have access to the algorithm, which is proprietary, they created their own predictive model using the same inputs given to participants in their study.

“Despite using only 7 features as input, a standard linear predictor yields similar results to COMPAS’s predictor with 137 features,” the authors wrote. “We can reasonably conclude that COMPAS is using nothing more sophisticated than a linear predictor or its equivalent.”
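As a rough illustration of what a “standard linear predictor” means here, the sketch below trains a plain logistic-regression model by gradient descent on two synthetic features, loosely echoing the study’s reported two-feature model (age and number of prior convictions). The data, the generative rule, and every number below are invented assumptions for demonstration; this is not the authors’ code, their data, or COMPAS.

```python
import math
import random

def sigmoid(z):
    z = max(-30.0, min(30.0, z))  # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(data, labels, lr=0.1, epochs=200):
    """Fit p(y=1) = sigmoid(w1*x1 + w2*x2 + b) by stochastic gradient descent."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(data, labels):
            err = sigmoid(w1 * x1 + w2 * x2 + b) - y
            w1 -= lr * err * x1
            w2 -= lr * err * x2
            b -= lr * err
    return w1, w2, b

def accuracy(w1, w2, b, data, labels):
    hits = sum((sigmoid(w1 * x1 + w2 * x2 + b) >= 0.5) == bool(y)
               for (x1, x2), y in zip(data, labels))
    return hits / len(data)

# Synthetic "defendants": (age, prior-conviction count), both scaled to [0, 1].
random.seed(0)
data, labels = [], []
for _ in range(1000):
    age, priors = random.random(), random.random()
    # Assumed generative rule (invented): younger defendants with more
    # priors re-offend more often, with noise.
    y = 1 if random.random() < sigmoid(4 * priors - 3 * age) else 0
    data.append((age, priors))
    labels.append(y)

w1, w2, b = train_logistic(data, labels)
print(f"two-feature linear predictor accuracy: {accuracy(w1, w2, b, data, labels):.3f}")
```

Dressel and Farid’s point is that even a model this simple, given only a couple of features, can land in the same accuracy range as a 137-feature commercial tool on their data.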

Both study participants and COMPAS were found to have the same level of accuracy for black and white defendants.

The full study, “The accuracy, fairness, and limits of predicting recidivism,” was published in Science Advances and can be found online. This summary was prepared by Deputy Editor Victoria Mckenzie. She welcomes readers’ comments.

4 thoughts on “New Research Casts More Doubt on Risk Assessment Tool”

  1. I feel the title of this article is very misleading. There are no other “Tools” mentioned here and this article specifically is about the COMPAS. Many other jurisdictions have provided their tool questions and development for the public to actually see and there are continued validation studies and outcomes associated with these reports. Even the linked research article only talks about further analysis of the control group vs. COMPAS.

  2. These are important observations, especially in our technology-obsessed society. Shinier gadgets do not imply progress, and we should have enough gray matter within us not to trust black boxes when it comes to risk assessment. Transparency is key, and factors and their weights, as well as how they sum to the total score, need more nuance and support and less ‘trust me I know what I’m doing’ (h/t Sledge Hammer!). Thank you, Victoria, for a helpful summary! Where is David Hume when we need him most?

