‘Algorithmic Bias’: The Next Challenge for Justice Reform


Justice Rally, Toronto. Photo by Jason Hargrove via Flickr

As technology increasingly becomes an inseparable part of justice institutions, researchers have begun to address inequality and racial bias within the algorithmic designs that are being deployed to strengthen courts, policing and other components of the system.

According to a forthcoming paper in the Berkeley Technology Law Journal, the most effective way to address algorithmic bias is through a transformative justice framework.

“Data-driven technologies cannot be apolitical, ahistorical, or considered separate and distinct from social and power structures,” writes the paper’s author, Rashida Richardson, an Assistant Professor of Law and Political Science at Northeastern University.

“This is because technology, and scientific knowledge more generally, embeds and is embedded in social practices, identities, norms, conventions, discourse, instruments and institutions.”

The author begins by detailing how the lack of racial diversity within STEM companies, congressional offices, and even within technology policy careers and institutions like non-profits and think tanks has an impact on the way technology is created and used―resulting in biased algorithms that can harm communities while dominating the technology sector.

Richardson says the principal reason is that much of the algorithmic technology used today was built by a predominantly white workforce, inadvertently creating a technological “norm” when perceiving race.

“Whiteness is not perceived as a racial category, [as] other categories are,” she writes.

Exploring this concept through a justice-related lens, Richardson addresses “police crime data, which is a primary data source for data-driven technologies used in policing,” and details how algorithms trained on that data are used to identify areas of high crime.

However, Richardson writes, this ability comes at a price. The “concentrated sites of ‘disorder’” identified by these algorithms are likely to be disadvantaged neighborhoods, populated by people of color, who as a result are subject to over-policing.

In effect, the algorithms classify these neighborhoods as criminogenic, setting off a series of negative consequences for the people living there.

As an example, she cites the “broken windows” and “hot spot” strategies introduced by police over the past decades, which effectively target entire neighborhoods even though the threats to public safety may be confined to a much smaller area.

“In fact,” Richardson adds, “research suggests that the systematic effect of these …policing practices” is to enforce racial segregation.

See Also: Digital Policing Tools ‘Reinforce’ Racial Bias, UN Panel Warns

Similarly, Richardson writes that crime-focused geographic information systems (GIS) and other computer-based tools like CompStat “warrant greater scrutiny.”

Richardson cites how in 2017, Chicago neighborhoods were selected for ShotSpotter, ShotSpotter Connect, and Police Observation Devices after police technology flagged them as having high rates of gun violence and homicide. The neighborhoods, namely Englewood and West Garfield Park, are “almost exclusively comprised of Black and Latinx residents.”

This, Richardson writes, is no accident, and further speaks to how “policing policies, practices, and tactics serve to reinforce racial segregation and its consequences.”

Looking Ahead to Change

“A transformative justice framework is necessary to adequately examine and redress algorithmic bias as well as improve the development of data-driven technologies and applications,” Richardson writes in her paper.

Noting that courts have failed to provide transparency, oversight, or mechanisms for contesting algorithmic bias, Richardson says more research is needed to bring these practices into the open.

“A transformative justice framework and praxis for data-driven technology development and policy can help us advance towards a future where technology and society are designed for collective belonging,” Richardson concludes.

Rashida Richardson is an Assistant Professor of Law and Political Science at Northeastern University. She is also a visiting scholar at Rutgers Law School and the Rutgers Institute for Information Policy and the Law, and a senior fellow in the Digital Innovation and Democracy Initiative at the German Marshall Fund.

The full paper can be accessed here.

Additional Reading: Detroit Lawsuit Charges Facial Recognition Bias 2021

See Also: Facial Recognition Software Misreads African-American Faces: Study 2019

Andrea Cipriano is a TCR staff writer.
