
Should Robots Be Punished For Committing Crimes?

One legal scholar argues advanced machines should be considered moral actors — and penalized for their misdeeds

Illustration: Diana Quach
Apr 03, 2017 at 5:00 PM ET

When a robot hurts a person or damages property, who should take the blame?

With the rise of increasingly sophisticated artificial intelligence systems like deep neural networks, many would say that the blame falls squarely on the robot’s owner or designer. After all, researchers have repeatedly shown that artificial intelligence systems often inherit the biases of their designers, since the statistical models and the types of data used to “train” their algorithms dramatically affect how a system makes its determinations.
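To see how that inheritance can happen, consider a minimal, hypothetical Python sketch (not drawn from Hu’s paper or any real system): a toy “hiring” model trained on skewed historical records simply reproduces the skew. All of the data, the group labels, and the predict_hire function below are invented for illustration.

    # Minimal, hypothetical sketch: a toy "hiring" classifier trained on biased
    # historical data. All records and group labels here are invented.

    from collections import defaultdict

    # Biased historical records: (group, qualified, hired).
    # Qualified "B" candidates were historically hired less often than "A" candidates.
    training_data = [
        ("A", True, True), ("A", True, True), ("A", False, False),
        ("B", True, False), ("B", True, True), ("B", False, False),
        ("B", True, False), ("A", True, True),
    ]

    # "Train" by recording the observed hire rate for each (group, qualified) pair,
    # a stand-in for the statistical patterns a real model would learn.
    counts = defaultdict(lambda: [0, 0])  # maps key -> [times hired, total seen]
    for group, qualified, hired in training_data:
        key = (group, qualified)
        counts[key][0] += int(hired)
        counts[key][1] += 1

    def predict_hire(group: str, qualified: bool) -> bool:
        # Recommend hiring only if the historical hire rate for this pair is at least 50%.
        hired, total = counts[(group, qualified)]
        return total > 0 and hired / total >= 0.5

    # Two equally qualified candidates get different outcomes: the model has
    # inherited the bias present in its training data.
    print(predict_hire("A", True))  # True  (3 of 3 qualified "A" candidates were hired)
    print(predict_hire("B", True))  # False (only 1 of 3 qualified "B" candidates was hired)

The point of the sketch is that nothing in the code is explicitly prejudiced; the disparity comes entirely from the data the model was trained on, which is the kind of opaque, emergent behavior that complicates assigning blame to any one designer.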

But at least one legal scholar argues that robots could not only be considered morally responsible for their actions but also be held criminally liable and “punished” for any harms they might inflict.

That’s the provocative argument raised by Ying Hu, a resident fellow at Yale University’s Information Society Project, who recently presented a draft paper on the topic at the school’s We Robot conference in New Haven, Connecticut.

To make her case, Hu compared robots to corporations, which under U.S. law are considered “persons” whose actions are often detached from the decisions of individual people within the organization. Like corporations, Hu said, a “smart robot” — one that continually learns and adjusts its behavior based on experience — is guided by an internal decision structure that evolves over time. That structure could grow so complex that it becomes impossible to attribute the robot’s actions to any particular design flaw or human decision.

In other words, even a smart robot that has been carefully designed, thoroughly tested, and given proper instructions can still make harmful decisions attributable only to the robot itself. That would mean the robot is effectively making moral judgments, and that humans could serve as a kind of arbiter, identifying bad judgments and negative behavior.

“If they are morally responsible robots, it’s not sufficient to just punish the creator. You have to punish the moral agent itself,” Hu said during a panel discussion at the We Robot conference. “If and when we delegate the power to make moral decisions to robots, I argue there’s a duty on human beings to supervise them. There should be a process for us to evaluate robot reasoning, and if the reasoning is bad, to publicly announce that reasoning.”

By identifying certain forms of robot conduct as forbidden or criminal, a kind of signaling system could send a clear message to other robots and their human designers alike to avoid those bad decisions. And if a robot fails to heed the warnings, Hu said, its “punishment” could include deactivation, reprogramming, or simply being labeled “criminal,” like a kind of robotic Scarlet Letter that warns others to stay away.

Hu warned that questions about criminal liability for robots may need to be answered sooner than we think, especially given that self-driving cars and robot security guards already roam the streets in some U.S. cities. She also isn’t the first to raise the idea of robot personhood: last summer, the European Union considered a plan that would classify autonomous robots as “electronic persons” with “specific rights and responsibilities.”

Nevertheless, Hu stressed that criminalizing and punishing robots shouldn’t be a way to get their owners and designers off the hook.

“Just because we want to impose criminality on robots doesn’t mean we shouldn’t criminalize some conduct by robot designers and owners,” she said. “Just because something is treated as a legal person doesn’t mean that thing should have the same rights as a human being. What we need to do is figure out what rights robots might have, and what liabilities they might have if they’re treated as electronic persons.”