Facial recognition software provided by the surveillance firm Clearview AI has been used by thousands of police officers and government employees across the country to conduct searches despite widespread criticism from privacy advocates, and in some cases without proper training or supervision, BuzzFeed News reports.
In particular, the company has solidified relationships with law enforcement agencies such as police departments, federal security agencies such as Immigration and Customs Enforcement and the U.S. Air Force, district attorneys’ offices, campus security offices at various universities, and many others, BuzzFeed News said its investigation found.
Reporters learned that the facial recognition software, which has been criticized for programming that can lead to racial bias, was deployed in more than 340,000 searches across 1,803 public agencies between 2018 and 2020.
“The data indicates that Clearview has broadly distributed its facial recognition software to federal agencies and police departments nationwide, offering the app to thousands of police officers and government employees, who at times used it without training or oversight,” the article said.
“Often, agencies that acknowledged their employees had used the software confirmed it happened without the knowledge of their superiors, let alone the public they serve.”
Before diving into the discussion around Clearview AI’s impact on personal privacy and its connection to law enforcement, the BuzzFeed News reporters detail Clearview AI’s inception and how the technology works.
Clearview AI: ‘A Tool for Tracking’
Debuted in 2017 under the original name SmartCheckr, the tool was published by Australian-born college dropout Hoan Ton-That, who marketed it as a way to track people across separate social media platforms.
After receiving funding from a Facebook board member, the company changed its name to Clearview and revamped the product a year later, with a focus on facial recognition.
It has now grown into a searchable database of more than 3 billion images scraped without permission from places such as Facebook, Instagram, Twitter, Google and LinkedIn.
Each company, including YouTube, has since sent Clearview AI cease-and-desist letters, noting that scraping people’s data violated their terms of service, CBS News details. The companies have declined to say whether they will take further legal action.
Despite this, Clearview AI maintains that it has amassed one of the largest known databases of pictures of people’s faces, and that the software is “99 percent accurate,” though the company has never substantiated that claim, according to CNN.
“If you’ve posted images online, your social media profile picture, vacation snapshots, or family photos may well be part of a facial recognition dragnet that’s been tested or used by law enforcement agencies across the country,” BuzzFeed News details regarding the data scraping.
Transforming into a Law Enforcement Tool
Law enforcement agencies have long experimented with facial recognition technologies as a way to identify people of interest in surveillance photos by matching them against driver’s license or passport photos. Extending that reach to images scraped from social media is a more recent phenomenon.
To get Clearview AI onto police computers, the company used a commonplace sales strategy: targeting individual employees with free trials, creating “bottom-up” demand as officers advocated within their departments to sign up for a paid version, according to the BuzzFeed News report.
After March 2020, according to emails obtained by the reporters via public records requests, Clearview AI limited agencies on free trials to running only a few searches and added safeguards, such as requiring a supervisor’s approval and an active case number. Before then, there were no guardrails.
Smaller police departments were among Clearview’s earliest users, such as the one in Mountain Brook, Alabama, a town of about 20,000 residents, which tested the product by running nearly 590 searches.
The technology has since graduated to larger departments such as the Broward County Sheriff’s Office in Florida, which has conducted more than 6,300 searches, and the New York Police Department (NYPD), where just 40 individuals have run over 11,000 searches.
Many advocates are stunned, considering the NYPD appears to have lied about whether it ever used Clearview AI’s technology. In 2020, the department stated that it had “no institutional relationship” with the surveillance firm, but later confirmed it had worked with the vendor as early as 2018, according to Gizmodo.
In November 2020, the Los Angeles Police Department barred officers from using Clearview AI, as many were using it without permission, The Crime Report detailed.
In early 2021, a Los Angeles panel unanimously approved oversight measures for the facial recognition software, but advocates responded in March by suing Clearview AI themselves, calling it the “most dangerous” facial recognition database in the nation.
The tool reaches beyond local law enforcement, BuzzFeed News uncovered: public records list more than 20 bureau offices that have run over 5,800 searches as of 2020, and show employees at U.S. Customs and Border Protection holding over 270 accounts and completing 7,500 Clearview AI searches.
Many advocates and members of the public have been alarmed by these revelations. As Senator Chris Coons (D-Del.), chair of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, told BuzzFeed News: “As this reporting shows, we have little understanding of who is using this technology or when it is being deployed.”
He concluded, “Without transparency and appropriate guardrails, this technology can pose a real threat to civil rights and civil liberties.”
Of the 1,803 agencies contacted for comment, 1,161 did not respond to questions about whether they had used the software, while others, including the Los Angeles Police Department, the U.S. Departments of Justice, Defense, and State, and the FBI, declined to comment.
Ton-That told BuzzFeed News in a statement that over the course of two years, the company had helped “thousands of agencies” solve crimes including “child exploitation, financial fraud, and murder,” but did not provide specific examples when asked.
“As a young startup, we’re proud of our record of accomplishment and will continue to refine our technology, cybersecurity, and compliance protocols,” he said in his statement.
“We also look forward to working with policymakers on best practices to forge a proper balance between privacy and security that serves the interests of families and communities across America.”
Additional Reading: Facial Recognition: Beware the ‘Long Arm of the Algorithm’