The Robot in the Courtroom


Could the robot driving your car become a witness against you?

The growing sophistication of artificial intelligence (AI) tools makes it likely that such tools will increasingly be used in criminal trials, but they also pose new challenges in determining the reliability of “machine evidence,” according to a paper published in the Georgetown Journal of International Law.

Sabine Gless, the author of the paper and a professor at the University of Basel School of Law in Switzerland, argues that the fact-finding required to assess guilt or innocence in a trial is complicated when the source of the evidence is a machine that cannot be cross-examined.

Assumptions that evidence produced by such machines, or “machine evidence,” is objective or neutral may not necessarily be correct, she warns.

“As AI becomes more ubiquitous, and if such technology is deemed to be an accurate assessment of human conduct, more people may be willing to accept it as a reliable and trustworthy source of information,” Gless wrote.

“Despite this possibility, it remains unclear if and how such information would be admitted into a court of law.”

Gless proposed a “hybrid” approach that borrows from both the “adversarial” legal system used in the U.S. and the “inquisitorial” system used in European countries such as Germany to address the complicated challenges presented by digital data collected from “non-human” sources.

She used the example of a traffic accident involving cars that are jointly controlled by a human and a robot. The car’s computer system is programmed to take control when the motorist consents, while closely monitoring both the car and the driver. It records the driver’s reaction speed, and it can alert her if the vehicle is dangerously close to other vehicles or if she is falling asleep.
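
To make the idea concrete, the following is a minimal, hypothetical sketch (in Python) of the kind of monitoring logic such a driving assistant might run. The DrivingAssistant class, its thresholds, and its field names are illustrative assumptions, not drawn from Gless’s paper or from any real system; the point is only that every observation is timestamped and logged, and that the log is the raw “machine evidence” a court might later be asked to weigh.

```python
# Hypothetical driver-monitoring sketch. All names and thresholds are
# illustrative assumptions, not a real driving-assistant implementation.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List


@dataclass
class MonitorEvent:
    """One timestamped observation kept for a potential evidentiary record."""
    timestamp: str
    kind: str      # e.g. "reaction_time", "proximity_alert", "drowsiness_alert"
    detail: str


@dataclass
class DrivingAssistant:
    min_gap_m: float = 20.0          # assumed minimum safe following distance
    max_eye_closure_s: float = 2.0   # assumed drowsiness threshold
    log: List[MonitorEvent] = field(default_factory=list)

    def _record(self, kind: str, detail: str) -> None:
        # Every observation is logged with a UTC timestamp.
        self.log.append(MonitorEvent(datetime.now(timezone.utc).isoformat(), kind, detail))

    def record_reaction_time(self, seconds: float) -> None:
        self._record("reaction_time", f"{seconds:.2f}s")

    def check_gap(self, gap_m: float) -> bool:
        """Log an alert and return True if the car ahead is dangerously close."""
        if gap_m < self.min_gap_m:
            self._record("proximity_alert", f"gap {gap_m:.1f}m below {self.min_gap_m}m")
            return True
        return False

    def check_drowsiness(self, eye_closure_s: float) -> bool:
        """Log an alert and return True if the driver appears to be falling asleep."""
        if eye_closure_s > self.max_eye_closure_s:
            self._record("drowsiness_alert", f"eyes closed {eye_closure_s:.1f}s")
            return True
        return False


if __name__ == "__main__":
    assistant = DrivingAssistant()
    assistant.record_reaction_time(1.4)
    assistant.check_gap(12.0)          # too close: alert logged
    assistant.check_drowsiness(3.5)    # drowsy: alert logged
    for event in assistant.log:        # the log is the raw "machine evidence"
        print(event)
```

Note that in this sketch the assistant never explains why it flagged the driver; it only emits records. That gap between a machine’s output and its reasoning is precisely what Gless’s argument about cross-examination targets.
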

“As this technology progresses, humans will increasingly be sharing the wheel with so-called ‘driving assistants,’ or software bots that support the human driver’s performance and assist or even take over driving in specific situations,” she wrote.

“In the case of the latter, it is unclear, however, who will be seen as the driver at any given moment, and this has significant consequences for liability.”

If the motorist fails to heed the robot’s advice and crashes into another car, can she be held liable? Or if the robotic system itself fails, who is to blame?

These questions could confound the traditional way courtrooms present and assess evidence, even as robotic systems become more dependable and prevalent, Gless wrote.

One positive trait of AI testimony, the article argues, is objectivity, since machines have no emotional stake in the outcome of a case and thus cannot be guilty of perjury.

Nevertheless, Gless warned, AI “is not infallible,” and the evidence it provides needs human input to check for gaps or glitches.

“One must first acknowledge that robots and software bots—i.e., stand-alone machines or programs that interact with users of a consumer product—are different from forensic instruments like breathalyzers, DNA testing kits, or radar speed guns,” Gless wrote.

Cross-examining a Machine

How do you check whether a machine is providing an accurate account?

Despite their ability to collect vast amounts of data, AI-driven devices cannot themselves explain how they evaluate human conduct or reach a decision. Therefore, law enforcement and the courts must be cautious about what they learn from machine-generated data, Gless wrote.

“AI-driven devices cannot undergo the equivalent of cross-examination even where they are evaluating human users and coming to a conclusion, like whether or not a driver has the capacity to operate a vehicle,” she wrote.

The only way to allow machine evidence into the courtroom, she argued, is to let attorneys scrutinize how the machine works by interrogating its “design, algorithms, and machine learning/training data.”

Proposing what she said were “significant changes” to both systems in anticipation of courts around the world being faced with evidence generated by AI, she argued for a “hybrid” approach that draws from both adversarial and inquisitorial legal systems.

It would involve a judge assessing the validity of evidence recorded outside the courtroom, while allowing lawyers for the defense and prosecution to incorporate expert analysis of the data to assess guilt or innocence.

According to Gless, the kinds of machine evidence that could be used in trials were expanding as technology continued to improve. Facial recognition technology that purports to detect a person’s mental state or predict behavior, for example, could someday be introduced in court to establish motivation for a crime.

“Now is the time to prepare for fact-finding in ambient intelligent environments,” she asserted.

“To do so, we first must understand the characteristics of the various types of machine evidence and work with qualified experts to both understand the technology and explain the underlying legal concepts.

“Regardless of whether AI becomes a new tool to convict or acquit, we must ensure trustworthiness in the fact-finding process where machine evidence is used in criminal proceedings.”

Download the complete paper here.

TCR News Intern Sara Rose George contributed to this summary.
