Algorithms are being used in bail, sentencing and parole decisions, but there is little oversight or transparency regarding how they work, reports Wired. Courts and corrections departments use algorithms to assess a defendant's "risk", from the probability that an individual will commit another crime to the likelihood that a defendant will appear for a court date. Typically, government agencies do not write their own algorithms; they buy them from private businesses. The algorithm is therefore often proprietary: only the owners, and to a limited degree the purchaser, can see how the software makes decisions.
There is no federal law that sets standards for these tools or requires their inspection, the way the FDA does with new drugs. Given their opaque nature, how does a judge weigh the validity of a risk-assessment tool if he or she cannot understand its decision-making process? How could an appeals court know whether the tool decided that socioeconomic factors, a constitutionally dubious input, determined a defendant's risk to society? The legal community has never fully discussed the implications of algorithmic risk assessments. Now that these tools have proliferated, attorneys and judges are grappling with the lack of oversight and with the tools' impact.