When Robots Commit Crime, Who Should Be Charged?


Once the stuff of science fiction, cars driven by robots could soon become a reality—but what happens when one of these cars is involved in an accident?

In a study titled “If Robots Cause Harm, Who Is to Blame? Self-Driving Cars and Criminal Liability,” forthcoming in New Criminal Law Review, researchers consider whether the individuals who program and operate such robots should bear responsibility for injuries, damage, or loss of human life caused by the machines.

“If robots cannot be punished, under what conditions should humans be held criminally responsible for producing, programming, or using intelligent machines that cause harm?” write Sabine Gless, Emily Silverman and Thomas Weigend. “Take, for example, a self-driving car that runs over a small child because its environment-scanning sensors misinterpreted its surroundings and failed to identify the child as a human being.”

The authors note that while private (civil) law allows “entities other than natural persons” to be held liable for damages, the “acts” of non-human agents are difficult to accommodate in criminal law. And since there is no international criminal law governing robots, countries must devise solutions based on their own general rules and principles.

The authors conclude that the humans who program self-driving cars should not be held responsible for harm caused by the robot, unless the harm can be shown to have resulted from negligence or a foreseeable error. They argue that the benefits of self-driving cars—particularly to elderly and disabled people—and the prospect that replacing human-driven cars with self-driving ones may reduce the overall number of accidents, outweigh the potential drawbacks.

“If society embraces the convenience, the opportunities, and the safety assurances associated with self-driving cars, it should also be willing to accept the fact that (possibly very rare cases of) unexpected actions of robots will lead to (generally) foreseeable harm to random victims,” the authors conclude.

But the authors also note that technology may improve in the future to the extent that “robots may become so similar to human beings that they, like us, will be able to ‘feel’ the effects of criminal punishment.” If that ever happens, they write, “it might well make sense to consider punishing robots.”
