Citing concerns about potential racial bias, Pittsburgh has suspended an algorithmic policing program that predicted “hot spots” for criminal activity and will instead focus any new data-driven programs on deploying social services, the Pittsburgh Post-Gazette reports. The tool, developed at Carnegie Mellon University, drew on data sources to predict where crime would occur and dispatch patrols to those areas. The city began the program in 2017 but has “no plans to restart it at this time,” Mayor Bill Peduto told the Pittsburgh Task Force on Public Algorithms, hosted by the University of Pittsburgh’s Institute for Cyber Law, Policy and Security.
The project was a partnership between Carnegie Mellon and the Pittsburgh Bureau of Police, the Pittsburgh Department of Innovation and Performance, and the Pittsburgh Department of Public Safety. Crime dropped during the program, but only four arrests were made across 20,000 hot spot patrols. A 2018 research paper on the “hot spot” program noted that “even a small amount of effort and resources invested in such a program can lead to measurable and practically significant reductions in crime.” Chris Deluzio, policy director for Pitt Cyber and a task force member, said the group’s concern was two-fold: that the algorithm could replicate any existing patterns of bias, and that there was a lack of transparency around its use.