The San Francisco Board of Supervisors voted on Thursday to reverse a decision it made last week allowing police to use robots to administer deadly force, following nationwide public pushback.
The original 8–3 vote authorizing the robots would have allowed police to kill criminal suspects with remotely operated robots in response to situations where they believe there is an imminent threat of death to officers or members of the public.
The initiative was supposed to start with teams equipping bomb-disposal robots with explosive charges that could be remotely detonated, similar to the makeshift lethal-force robot used after a 2016 mass shooting in Dallas, Texas.
Last week’s vote was prompted by a new California state law that requires police to get city approval for the use of military-grade equipment. Following the vote, dozens of demonstrators, including civil rights groups and city supervisors, gathered outside City Hall on Monday to protest the board’s decision.
The Electronic Frontier Foundation also released a letter on Monday, signed by 44 community organizations, opposing the board’s decision to green-light lethal robots and calling on the board to take opposition to the plan seriously.
“SFPD’s proposal, if approved, threatens the privacy and safety of city residents and visitors,” the EFF letter argued.
Supervisors like Dean Preston, who voted against the lethal robot authorization originally, felt the public did not have enough time to weigh in on the issue.
“The people of San Francisco have spoken loud and clear: There is no place for killer police robots in our city,” Preston said in a press release.
Gordon Mar, a supervisor who originally voted in favor of the lethal robots, reversed his position in the second vote. In a series of tweets on Monday, he said he regretted his original vote.

“I’ve grown increasingly uncomfortable with our vote & the precedent it sets for other cities without as strong a commitment to police accountability,” Mar wrote. “I do not think making state violence more remote, distanced, & less human is a step forward.”
The decision not to authorize the deadly-force robots may not be permanent; the San Francisco Chronicle reported after the vote that the issue has been sent back to a committee for “further discussion.”
Comments
I do not know the current policy in the NYPD and I am not speaking on their behalf. I am not an attorney. I was the head of the Hostage Negotiation Team in the early 1980s when the Department acquired three robots, two for the Bomb Squad and one for the Emergency Service Unit. Early on we had discussions with the PD Legal Bureau concerning use-of-force issues. The Deputy Commissioner of Legal Matters stated unequivocally that any use of force utilizing the robot must meet the requirements set forth in Article 35 of the NYPL (Justification for the Use of Force). Very straightforward. I suggest that the San Francisco government consider such a straightforward policy and allow the police to protect the public and themselves with tools that can maximize safety and control.
I would respectfully disagree with your position, Mr. Louden. Police agencies are public safety agencies and as such are expected to behave within certain guidelines. You can see from the LE statement, “robots in response to situations where they believe there is an imminent threat of death to officers or members of the public,” that this leaves the decision to use this tool to the “opinion” of the LE, which is an overreach. I fully realize LE members must make this decision many times; however, use of force is normally circumscribed far more narrowly than the above statement.

Taking the Dallas, TX incident as an example, I for one have never been comfortable with what happened. The police had the suspect cornered and there was no place for him to go. They sent in a robot with a bomb and killed him. In my opinion, this turned the force from a public safety agency into an execution squad: judge, jury, and executioner without any semblance of a conviction. What should have happened is LE should have simply waited him out. Too often we see LE move in and kill when it is not actually necessary to do so. This puts not only LE officers at risk but the public as well.
I would respectfully disagree with your position, Mr. Duncan; I am a citizen with zero law enforcement background and no relatives in law enforcement.
I see no difference between a human LEO, trained to protect lives, employing deadly force to protect other human lives and a “robot” controlled by human handlers employing deadly force to protect human lives.
I do not see any need to place a human at deadly risk when a fully armored, non-sentient piece of machinery can be employed to effect the same outcome.