Can Artificial Intelligence Give Us Equal Justice?

Illustration by da zheng via Flickr

It’s “misleading and counterproductive” to block the use of machine-learning algorithms in the justice system on the grounds that some of them may be subject to racial bias, according to a forthcoming study in the American Criminal Law Review.

The use of artificial intelligence by judges, prosecutors, police and other justice authorities remains “the best means to overcome the pervasive bias and discrimination that exists in all parts of the deeply flawed criminal justice system,” said the study.

Algorithmic systems are used across the U.S. justice system, in practices ranging from identifying and predicting crime “hot spots” to real-time surveillance.

More than 60 kinds of risk assessment tools are currently in use by court systems around the country, usually to weigh whether individuals should be held in detention before trial or can be released on their own recognizance.

The risk assessment tools, which assign weights to data points such as previous arrests and the age of the offender, have come under fire from activists, judges, prosecutors, and some criminologists, who say the tools are themselves susceptible to bias.
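
To make the mechanics concrete, here is a minimal sketch, in Python, of how such a weighted scoring tool might work. The feature names, weights and threshold are hypothetical and are not drawn from any deployed instrument.

    # Minimal sketch of a weighted risk score. Features, weights and the
    # threshold are invented for illustration; no deployed tool is implied.
    WEIGHTS = {
        "prior_arrests": 1.5,     # each previous arrest adds to the score
        "age_under_25": 2.0,      # youth treated as a risk factor
        "failed_to_appear": 3.0,  # prior failure to appear in court
    }

    def risk_score(defendant: dict) -> float:
        """Sum the weighted data points into a single score."""
        return sum(weight * defendant.get(feature, 0)
                   for feature, weight in WEIGHTS.items())

    def recommendation(defendant: dict, threshold: float = 5.0) -> str:
        """Map the score onto a pretrial detention recommendation."""
        if risk_score(defendant) >= threshold:
            return "detain pending trial"
        return "release on own recognizance"

    print(recommendation({"prior_arrests": 2, "age_under_25": 1}))  # detain pending trial
    print(recommendation({"prior_arrests": 1}))                     # release on own recognizance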

And public opinion reflects the same wariness. According to a 2019 survey cited in the report, 58 percent of Americans didn’t think that using an algorithm to make parole decisions was appropriate.

The study authors concede that many of the algorithms are far from perfect, but they argue that dropping them altogether would remove an important counterweight to human fallibility.

The report, entitled “The Solution to the Pervasive Bias and Discrimination in the Criminal Justice System: Transparent Artificial Intelligence,” warned that “algorithmic aversion” accounted for much of the distrust of computerized systems that deploy artificial intelligence.

In their analysis of how algorithms are currently being used, authors Mirko Bagaric of Swinburne Law School in Melbourne; Melissa Bull, Dan Hunter and Nigel Stobbs of the Queensland University of Technology; and Jennifer Svilar of the University of Tennessee College of Law said it makes more sense to fix some of the problems than to throw out the tool completely.

According to the authors, because algorithms are designed by humans but executed with the speed and consistency of a machine, they have the power to strengthen processes at every level of the criminal justice system, doing the work faster and with less inherent bias.

“They are always designed by humans and hence their capability and efficacy are, like all human processes, contingent upon the quality and accuracy of the design process and manner in which they are implemented,” the study said.

“Moreover, because algorithms do not have feelings, the accuracy of their decision-making is far more objective, transparent and predictable than that of humans.”

Many judges have resisted risk assessment instruments, arguing that their own experience on the bench is a better gauge of an individual’s likelihood of recidivating. But the authors say the algorithms are best used in conjunction with the assessments of prosecutors, police and jurists.

But without the additional help, discrimination is inevitable in the system, the authors wrote.

“While sentencing law and criminal law do not expressly target or discriminate against certain groups, much critical research has demonstrated that in practice sentencing systems operate in discriminatory ways,” said the report.

Discrimination occurs because authorities, often overwhelmed by huge caseloads, rely on racial or gender stereotypes, frequently without realizing it, to keep the process moving.

“Harsher penalties for certain groups within the community cannot be uncoupled from the reality that sentencing discretion, which is part of the judicial decision making process, unavoidably leads to sentences based, at least in part, on the personal predispositions of judges,” said the report.

The problem extends beyond race. Preconceived biases about age, poverty, cultural values, political beliefs and experience in the job all play a part in the decisions made at every level of the criminal justice system.

“People assume that ‘their judgments are uncontaminated’ by implicit bias, but all people, including judges, are influenced by their life journey and ‘are more favorably disposed to the familiar, and fear or become frustrated with the unfamiliar,’” said the report.

The Google Maps Model

The authors use Google Maps and weather forecasting as models to explain what goes into an algorithm. Google Maps uses large amounts of geospatial information from around the world to accurately predict the best route for a user. Similarly, weather forecasting draws on hundreds of thousands of observations from ground stations, weather balloons and satellites to give people the most accurate prediction of what to expect when they walk outside.

Another example is autopilot. The algorithms in modern-day planes allow pilots to fly on autopilot, but pilots are still trained to fly the plane manually should the system fail.

Algorithmic tools such as Compstat and PredPol, though their accuracy is debated, suggest that “predictive policing systems are statistically more likely to predict when and where some crimes will occur than human crime analysts” alone, said the report.

Algorithms can also infer relationships between variables, leading critics to fear that discrimination against disadvantaged groups will persist even when algorithms are used.
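
That worry is easiest to see with a proxy variable. The sketch below, again with entirely hypothetical numbers, simulates two groups whose underlying behavior is identical but whose neighborhoods are policed at different intensities; the recorded arrest counts, and therefore any score built on them, diverge anyway.

    # Sketch of the proxy-variable concern: race is never an input, but a
    # feature shaped by policing intensity carries the disparity anyway.
    # All numbers are invented for illustration.
    import random

    random.seed(0)

    def recorded_arrests(heavily_policed: bool) -> int:
        base_incidents = random.randint(0, 2)  # same underlying behavior in both groups
        return base_incidents + (2 if heavily_policed else 0)  # more arrests where policing is heavier

    group_a = [recorded_arrests(True) for _ in range(1000)]
    group_b = [recorded_arrests(False) for _ in range(1000)]

    print(sum(group_a) / len(group_a))  # roughly 3.0 recorded arrests on average
    print(sum(group_b) / len(group_b))  # roughly 1.0, despite identical underlying behavior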

“Even when algorithms are deemed successful, there is still concern that they will ‘perpetuate racial disparities within the criminal-justice system,’” the report acknowledged.

There’s also concern that using algorithms for processes like risk assessment will disproportionately harm younger people, since age is strongly correlated with recidivism and crime rates. If age is used as a variable that raises the predicted likelihood of offending, a younger person could be assigned a higher risk score and, in turn, a higher recommended bail amount.
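
A toy example of that effect, using the same kind of hypothetical weights as above: two defendants with identical records, differing only in age, end up with very different scores.

    # Sketch of the age concern: identical records, different ages.
    # Weights are hypothetical.
    WEIGHTS = {"prior_arrests": 1.5, "age_under_25": 2.0}

    def risk_score(defendant: dict) -> float:
        return sum(w * defendant.get(f, 0) for f, w in WEIGHTS.items())

    older = {"prior_arrests": 2, "age_under_25": 0}
    younger = {"prior_arrests": 2, "age_under_25": 1}

    print(risk_score(older), risk_score(younger))  # 3.0 vs 5.0: youth alone pushes the score over a 5.0 threshold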

Other problems arise when algorithms are used in video surveillance or facial recognition software.

Although some surveillance can be used to detect suspects or alert police to signs of aggression, the use of algorithms to detect faces and constantly surveil the community raises privacy concerns.

However, the authors noted that in today’s digital society, privacy is already a tricky concept.

Even so, surveillance footage will often only be viewed “when a computer detects something suggesting that a crime is being committed or that an offender has been recognized,” said the authors.

“Thus, for the most part, individuals will be potentially observable, not constantly observed or monitored by law enforcement.”

There’s also an overall lack of transparency in how the algorithms operate. But the authors note that judicial discretion—without the use of data—may lead to equally “questionable decisions.”

“Algorithms can replicate all of the high-level human processing but have the advantage that they process vast sums of information far more quickly than humans,” said the authors.

“Research suggests that although risk assessment and risk and needs assessment tools are far from perfect, the best instruments, administered by well-trained staff, can predict re-offending with 70 percent accuracy,” said the report.

The authors recommended several reforms to the criminal justice system’s use of algorithms, including greater transparency in how the algorithms are designed, more fairness in the algorithms themselves, and more consistency, so that decisions are reliable and disparities between groups are reduced.

“Algorithms can help reform the criminal justice system, but they ‘must be carefully applied and regularly tested to confirm that they perform as intended,’” said the authors.

They aren’t perfect, “but they are superior to judgments made by humans.”

The authors are: Mirko Bagaric, Dean of Law at Swinburne Law School in Melbourne; Jennifer Svilar, J.D., University of Tennessee College of Law and a former deputy division chief at the National Security Agency; Melissa Bull, law professor at the Queensland University of Technology Law School; Dan Hunter, Dean of the Faculty of Law, Queensland University of Technology; and Nigel Stobbs, Ph.D., Queensland University of Technology Law School.

The full report can be downloaded here.

See more: ‘You Can’t Solve Domestic Terrorism with an Algorithm.’

Emily Riley is a TCR Justice Reporting intern.

One thought on “Can Artificial Intelligence Give Us Equal Justice?”

  1. Like many defenders of algorithmic decision-making in the justice system, this article apparently (I haven’t read the original) avoids the central problem. All justice decision-making begins with law enforcement’s awareness of, and response to behaviors in the community. Justice data are affected by racial and class bias from that point forward, and algorithms are blind to it. The authors give away the problem when they refer to algorithms as predicting “reoffending.” No. They predict rearrest and legal processing using data from the prior actions of police and courts. Useful, but tainted by bias in undetectable ways.
