Facial Recognition: Beware the ‘Long Arm of the Algorithm’


Photo by Endstation jetzt via Flickr

Facial recognition alone is unlikely to provide sufficient evidence to establish a suspect's guilt in court without the intervention of human judgment, say two researchers from the United Kingdom.

A case study analyzing the South Wales Police's automated facial recognition (AFR) pilot program, called "AFR Locate," found serious limitations and errors when the program's algorithms were used.

“AFR raises the risk that scientific or algorithmic findings could usurp the role of the legitimate decision-maker,” concluded the authors of the study.

“We cannot allow police officers, the judge or the jury to be reduced to the long arm of the algorithm. The former have the decision-making prerogative.

“Their decisions may not ultimately be final; but only they can make these decisions.”

Automated Facial Recognition, known simply as facial recognition in the U.S., is increasingly used by law enforcement authorities as a tool to identify suspects in complex criminal investigations.

The authors—Kyriakos N. Kotsoglou, Senior Lecturer of Law at the University of Northumbria at Newcastle School of Law; and Marion Oswald, Vice-Chancellor's Senior Fellow in Law at the University of Northumbria at Newcastle—say the technology shows great promise, but needs to be treated with caution when its results are used in courtroom settings.

The "AFR Locate" program in South Wales involved the deployment of CCTV (closed-circuit TV) surveillance cameras to capture digital images of members of the public, which were then compared with watchlists compiled for the purposes of the pilot.

The images were examined for "biometric data," or facial features such as the spacing between the eyes, the length of the bridge of the nose, the contour of the lips, and characteristics of the ears and chin.

That information was then cross-referenced against a database of digital images, like the FBI's Most Wanted list, to catch potentially dangerous criminals.

A “similarity score” was created, where a higher number indicates a higher probability of likeness.
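In rough terms, a similarity score reduces two faces to numeric feature vectors and measures how close they are. The sketch below is purely illustrative—real AFR systems use proprietary embeddings and scoring, and the feature names and numbers here are invented—but it shows the general idea of scaling a closeness measure to a score where higher means more alike:

```python
import math

def similarity_score(features_a, features_b):
    """Cosine similarity between two face-feature vectors, scaled to 0-100.

    Illustrative only: actual AFR software does not necessarily use
    cosine similarity or this scale.
    """
    dot = sum(a * b for a, b in zip(features_a, features_b))
    norm_a = math.sqrt(sum(a * a for a in features_a))
    norm_b = math.sqrt(sum(b * b for b in features_b))
    return 100 * dot / (norm_a * norm_b)

# Hypothetical measurements (eye spacing, nose-bridge length, lip contour)
probe = [62.0, 48.5, 33.1]           # face captured on CCTV
watchlist_entry = [61.5, 49.0, 33.4]  # image on the watchlist
score = similarity_score(probe, watchlist_entry)  # near 100 for similar faces
```

Two nearly identical vectors score close to 100, while unrelated ones score much lower—mirroring the article's point that "a higher number indicates a higher probability of likeness."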

According to the AFR Locate website, if the system does not find a match between a civilian's face and a wanted criminal's image, the photo is then deleted.

However, if the algorithmic process does identify a match between a face captured on CCTV cameras and a headshot on the watchlist, then a human being must get involved and make an assessment by reviewing the AFR’s “match,” the paper said.

The researchers argued that it's important to have a "human eye" double-check the work the AFR system is doing, because people's lives and reputations are potentially at stake.

The length of the pilot program was unclear. But the program’s website says that it “has been used to assist in the identification of hundreds of suspects across South Wales and more recently assisted in identifying the most vulnerable in our communities.”

“It is increasingly likely that we will soon see in England and Wales police interventions, such as stop-and-search or arrest, based [partially or exclusively] upon live or after-the-event AFR matching,” the paper said.

Courts in the United Kingdom have ruled that the use of AFR Locate was not an "intrusive act" in the sense of physical entry onto property, contact, or force, and therefore fell within the common law duty to prevent and detect crime—and the corresponding common-law power to take steps to do so—the paper explains.

AFR: Trigger for Intervention or Arrest?

Nevertheless, the authors still raised concerns about accuracy as well as privacy.

The "default threshold value" at which the system alerts for a similarity—set by the AFR's coders and engineers—could increase the likelihood of "false positives," in which innocent people are flagged as wanted criminals.
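The trade-off the authors describe can be illustrated with a toy sketch. The scores and threshold values below are invented for illustration, not drawn from AFR Locate; the point is only that lowering the alert threshold turns more near-misses into alerts:

```python
def alerts(scores, threshold):
    """Return the indices of comparisons that would trigger an alert."""
    return [i for i, s in enumerate(scores) if s >= threshold]

# Hypothetical similarity scores for five innocent passers-by compared
# against one watchlist image: none is a true match.
innocent_scores = [41.0, 55.2, 63.8, 72.5, 79.9]

strict = alerts(innocent_scores, threshold=85.0)   # no false positives
lenient = alerts(innocent_scores, threshold=60.0)  # three false positives
```

With the stricter threshold nothing fires; with the lenient one, three innocent people would be flagged for human review—exactly the kind of engineering choice the authors argue should not escape scrutiny.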

Citing error rates and other uncertainties in the system, the authors suggested that image matches identified by AFR may not be admissible as evidence.

These fears resemble concerns about false eyewitness identification, the authors explained: a jury that isn't fully educated about how memory operates under stress may believe eyewitness testimony is more powerful and reliable than other factual evidence suggesting otherwise.

Building in checks and balances is one way to ensure the technology is useful not only for identifying suspects but also for developing and proving criminal cases, the authors said.

“Given that simple real-life situations including face recognition (or cognition in general) are analytically intractable, we need a ‘human eye’ to decide whether an algorithmically generated match can be declared valid,” the authors concluded.

Editor's Note: Clare Garvie, a fellow at the Georgetown University Center on Privacy and Technology, will examine the implications of facial recognition technology at the Harry Frank Guggenheim Symposium on Crime in America in a panel session at John Jay College Friday, Feb. 21 (4:00 pm-5:00 pm EDT). Watch the livestream of the session here, or watch for TCR's coverage.

The complete paper can be downloaded here.

Additional Reading: Facial Recognition Software Misreads African-American Faces: Study

Andrea Cipriano is a staff writer for The Crime Report.
