New York City is taking steps to address algorithmic bias in city services. The City Council passed a bill that would require the city to address bias in algorithms used by the police department, the courts, and dozens of city agencies, Vice reports. The bill would create a task force to determine how to test city algorithms for bias, how citizens can request explanations of algorithmic decisions when they dispute the outcome, and whether it is feasible to make the source code used by city agencies publicly available.
Criminal justice reformers and civil liberties groups charge that despite claims of objectivity, algorithms reproduce existing biases, disproportionately targeting people by class, race, and gender. A ProPublica investigation found that a risk assessment tool was more likely to mislabel black defendants than white defendants. Studies have found facial recognition algorithms to be less accurate for black and female faces.
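The disparity ProPublica reported is often framed as a gap in false positive rates: among people who did not reoffend, how often each group was flagged as high-risk. A minimal sketch of that comparison is below; the data, the `high_risk` label, and the group names are entirely made up for illustration and do not reflect ProPublica's actual dataset or methodology.

```python
# Illustrative only: comparing false positive rates of a hypothetical
# risk score across two groups. All records below are invented.

def false_positive_rate(records):
    """Share of people who did NOT reoffend but were labeled high-risk."""
    non_reoffenders = [r for r in records if not r["reoffended"]]
    if not non_reoffenders:
        return 0.0
    flagged = sum(1 for r in non_reoffenders if r["high_risk"])
    return flagged / len(non_reoffenders)

# Hypothetical records: group, risk label, and actual outcome.
records = [
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": True,  "reoffended": True},
    {"group": "A", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": True,  "reoffended": True},
    {"group": "B", "high_risk": False, "reoffended": False},
]

by_group = {
    g: false_positive_rate([r for r in records if r["group"] == g])
    for g in ("A", "B")
}

# Group A: 2 non-reoffenders, 1 flagged -> FPR 0.5
# Group B: 2 non-reoffenders, 0 flagged -> FPR 0.0
print(by_group)
```

An audit of the kind the task force might commission would run this comparison at scale; a large gap between groups is the sort of disparity critics point to.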
Critics of predictive policing, which uses statistics to determine where officers should spend time on their beats, say it reinforces existing biases and sends police back to already over-policed neighborhoods.
Rachel Levinson-Waldman of the Brennan Center for Justice said New York's police department refuses to disclose the source code for its predictive policing program, claiming it would help criminals evade the police. (Three academics argue in the New York Times that even imperfect algorithms improve the justice system.) The City Council on Tuesday approved the Right to Know Act, which requires changes to day-to-day interactions between police officers and those they encounter.
The measures drew opposition from criminal justice reform groups and the city's largest police officers' union, the New York Times reports. Reformers said the bill omitted many common street encounters, including car stops and questioning by officers in the absence of any reasonable suspicion of a crime.