State and federal lawmakers are calling for new rules governing, and investigations into, the use of facial-recognition scans of driver’s license databases by Immigration and Customs Enforcement and other agencies, fueling a debate over technology that critics call a “massive breach of privacy and trust.”
The federal government helps fund and train local police agencies in body-camera use, but prohibits the devices’ use by federal units and joint task forces. Some cities are pulling out of such operations as a result.
The Murder Accountability Project fed information about thousands of Chicago homicide victims and the way they died into a computer, which spit out 51 strikingly similar cases involving women whose bodies were found in some of the poorest pockets of the city. “When you put the narratives together … it just screams serial killer,” said the project’s Thomas Hargrove.
Microsoft rejected a California law enforcement agency’s bid to install facial-recognition technology in officers’ cars and body cameras, citing human rights concerns. The company concluded the software would lead to innocent women and minorities being disproportionately held for questioning.
Detectives often wait weeks for an analysis of DNA samples, but a “Rapid DNA” machine can analyze the DNA in a swab and produce a profile in less than two hours. The FBI is launching a project to connect “Rapid DNA” machines to its national DNA database.
In an unusual consensus, artificial intelligence researchers, activists, lawmakers and many of the largest technology companies agree that facial recognition software breeds bias, risks fueling mass surveillance and should be regulated.
The constitutional prohibition against unreasonable searches and seizures could prevent law enforcement from using the sophisticated surveillance technology made possible by artificial intelligence, according to a University of California-Davis law professor.