Report Shines Light on Predictive Policing Data


San Francisco police. Photo by Torbakhopper via Flickr

How does police data tainted by unconstitutional and racially biased police practices, so-called "dirty data," infect predictive policing systems?

A new report from AI Now, an interdisciplinary research center at New York University that studies the social implications of artificial intelligence, uses 13 cities as case studies to raise specific concerns about the data underlying the algorithms these systems use to predict likely lawbreakers, Fast Company magazine reports.

The report's best-known example is Chicago, where allegedly unlawful stop-and-frisk records and other data problems cited in the Justice Department's probe following the Laquan McDonald shooting were baked into the Chicago Police Department's Strategic Subject List, which identified hundreds of thousands of people as high risk.

Other jurisdictions examined, also subject to DOJ consent decrees, include New Orleans, Milwaukee, and Maricopa County, Ariz. Rashida Richardson, director of policy research at AI Now, said it was difficult to find information about police data-sharing practices, such as what data is shared and with which other jurisdictions, as well as about the predictive policing systems themselves.

Richardson said that HunchLab and PredPol are the two most common predictive policing systems among the 13 jurisdictions. IBM and Motorola also offer predictive policing products, while some jurisdictions develop their own systems in-house. It is currently unknown how pervasive these automated systems are in the United States.

See Also: The Perils of Big Data Policing 
