The growing use of artificial intelligence (AI) by police forces requires new vigilance on the part of courts and the public about finding the right balance between civil liberties and public safety, warns a professor at the University of California-Davis School of Law.
Law enforcement has been using computers for decades to handle large amounts of investigative information, but new technology such as facial recognition, ShotSpotter, financial anomaly detection, and automated license plate readers has allowed police to increase the scale and speed of processing information, writes Elizabeth E. Joh, a professor of criminal law and procedure, constitutional law, and policing at the U.C. Davis School of Law.
That “warrants new scrutiny” — especially since many communities are unaware of the extent of advanced technology used by their law enforcement agencies, Joh wrote in an essay for Viewpoints, a newsletter published by the Association for Computing Machinery.
For example, the Chicago Police Department uses an algorithm that identifies which city residents may be at especially high risk as perpetrators or victims of gun violence.
Police in Fresno, Calif., “piloted an alert system that tells an officer whether the driver the police officer just pulled over to the side of the road poses a threat.”
Dozens of other police departments use a program called PredPol, a machine learning algorithm that maps granular 500 x 500-foot sections of the city where crime is “more likely to occur,” Joh reports.
One danger of the growing reliance on technology is that if the tools malfunction or are used incorrectly, serious consequences can result, according to Joh.
Artificial Intelligence removes “human checks” where police would traditionally enter a situation using their senses and basic skills to interpret what they are seeing.
Joh gives the example of a bystander reporting abuse.
How would the AI distinguish truth from lies, she asked.
AI also allows the police to hide their presence in communities, “vastly expanding the pool of people and activities the police can watch.” Even simple license plate readers can scan hundreds of plates a minute.
Moreover, some cameras are connected to the internet, opening up a possibility for hacker activity.
“Worse, some [cameras] are leaking sensitive data about vehicles and their drivers — and many have weak security protections that make them easily accessible,” TechCrunch writes.
Joh noted that legal rulings already have given a green light to wider law enforcement use of the cellphone location technology.
In Carpenter v. United States, the Supreme Court ruled in 2018 that the government generally needs a warrant to access historical cellphone location records; in that case, the FBI had obtained records showing more than 12,000 location points around the time of a series of robberies without one.
Even though the Carpenter case is not explicitly about AI, it raises issues directly relevant to privacy and information gathering.
“The Court was concerned about tools that had extended beyond ‘augmenting the sensory faculties bestowed upon [the police] at birth,’” Joh explained.
Automated, third-party information-gathering by police challenges traditional notions of privacy, she wrote.
“The conventional view is that no matter whether the government has taken one or a thousand snapshots of your face, you have given up your privacy rights,” wrote Joh.
The author concludes with a quote from the Carpenter ruling: “Unlike the ‘nosy neighbor who keeps an eye on comings and goings,’ the technology used by the police was ‘ever alert, and [its] memory is nearly infallible.’”
Courts and civil liberties groups will therefore need to confront a worrying new reality, the paper said.
“[The] artificial intelligence tools being adopted by police departments … are cheap, powerful, ubiquitous, automated and invasive of privacy in ways that are novel and alarming,” warned Joh.
The full paper can be accessed here.
This summary was prepared by Andrea Cipriano, a TCR staff writer.