Detroit Lawsuit Charges Facial Recognition Bias

Photo by EFF Photos via Flickr.

The city of Detroit is being sued by a man who claims he was wrongfully arrested as a result of misleading facial recognition technology used by police, according to Courthouse News Service. 

Robert Williams, a resident of a Detroit suburb, asked in a federal lawsuit filed Tuesday whether facial recognition technology is “too flawed” to ensure that innocent people aren’t mistakenly identified as criminals, especially people of color like himself. The suit cites mounting research indicating that the technology’s algorithms were trained primarily on Caucasian faces and have high error rates with other races, Courthouse News and a recent Harvard University Graduate School of Arts and Sciences article detail.

Williams’ criticism is one that many share. He described the January 2020 ordeal of his arrest in an op-ed for the Washington Post last June.

He recalled that when police, during his interrogation, showed him a blurry surveillance camera photo of a Black man stealing watches from a Shinola store, he could only laugh and deny that it was him.

“I picked up the piece of paper, put it next to my face and said, ‘I hope you guys don’t think that all black men look alike,’” Williams wrote. 

In his case, detectives relied solely on a facial recognition match between the surveillance camera footage and a photo from Williams’ expired driver’s license. Prosecutors dropped the case less than two weeks later, citing insufficient evidence, Courthouse News reported.

Inequality in Facial Recognition Algorithms

Williams’ case is one of several cited by opponents of the technology. While popular facial recognition programs like Clearview AI boast accuracy rates above 90 percent, new research is exposing uneven error rates across demographic groups, particularly for people of color, according to a recent Harvard University Graduate School of Arts and Sciences article.

Studies have found that people of color are up to 100 times more likely to be misidentified by this type of technology than white men, because the algorithms were trained primarily to analyze photos of Caucasian faces.

Kevin E. Early, a criminologist and associate professor of sociology at the University of Michigan-Dearborn, said in a telephone interview with Courthouse News that the technology simply isn’t up to par.

“It hasn’t been perfected in terms of shades, hues, colors,” Early said. “It’s much more effective when you are looking at people who are not of color versus people who are of color.”

Early further cited an MIT study in which “light-skinned men were only misidentified 0.8 percent of the time while dark-skinned women were more than 34 percent more likely to be matched in error,” according to Courthouse News.

Harvard Graduate School of Arts and Sciences researchers found that the demographic group for which facial recognition algorithms are least accurate is Black women aged 18-30. An independent assessment by the National Institute of Standards and Technology (NIST) confirmed these studies, finding that face recognition technologies across 189 algorithms are least accurate on women of color, the Harvard University article details.

What’s worse, Early predicts that financially distressed areas will be hit hardest by that discrepancy.

“Persons of color, who are primarily a large portion of the poor in America, don’t have the resources to fight law enforcement,” he said.

City legislators are beginning to recognize how damaging the technology can be when it produces inaccurate matches. Cities including Boston and San Francisco have recently banned their police and local agencies from using it, citing racial discrimination.

Steps Toward Change

Alex Najibi, a fifth-year Ph.D. candidate studying bioengineering at Harvard University’s School of Engineering and Applied Sciences, writes that there are many ways facial recognition technology can be adapted to be more equitable. 

First, he writes, algorithms can be trained on diverse and representative datasets. Mandating a minimum standard of image quality for facial recognition analysis, and using camera settings suited to photographing Black subjects, could also reduce false identifications.

Najibi also recommends ethical audits by independent sources to hold facial recognition companies accountable.

Legislation is also a good place to turn, Najibi writes, as the Safe Face Pledge and the 2019 Algorithmic Accountability Act have empowered companies, prioritized data privacy, and achieved some progress.

“Face recognition remains a powerful technology with significant implications in both criminal justice and everyday life,” Najibi concludes. 

“Addressing racial bias within face recognition and its applications is necessary to make these algorithms equitable and even more impactful.”

Additional Reading: Facial Recognition Now Used in Over 1,800 Police Agencies: Report

Andrea Cipriano is a TCR staff writer.
