Why Humans Are So Terrified Of Robots With Feelings
The uncanny valley isn't just about appearance, but the mind as well
As the gap between human and artificial intelligence grows smaller, what’s left to distinguish human minds from those of machines? The most straightforward answer — and one backed up by research into how humans think about A.I. — is that computers might someday be able to think like humans, but they can never feel like humans. Emotions are supposed to remain an exclusively human domain, and attempts to change that appear to awaken a fundamental fear we have about beings that act like us, but aren’t us. Can increasingly sophisticated A.I. survive in the face of deep-seated human anxiety?
Sigmund Freud used the word uncanny a century ago to describe something so strangely familiar that it’s unsettling, even loathsome. The idea resurfaced in connection to robots in the 1970s, when Japanese roboticist Masahiro Mori coined the term ‘uncanny valley’ to describe our reactions to machines’ appearances. An industrial robot in an auto plant, which looks like just a giant mechanical arm, or even something like Honda’s ASIMO, which looks like a cartoon astronaut, may seem unremarkable — but an attempt to more perfectly create a human lookalike can be unnerving, even threatening. This unease seems to result from cognitive dissonance between a person’s prior understanding of how humans and machines differ and their subsequent interaction with something that blurs those distinctions.
Over the past decade, researchers have begun exploring efforts to create computers with personalities, shifting the idea of the uncanny valley from outer appearance to the machine’s mind. A pair of researchers at the University of North Carolina, Chapel Hill found in a 2012 study that a computer system that simply appeared to have feelings, without displaying any other identifiable human traits, was enough to make users uncomfortable. Though the concept of a thinking, feeling A.I. program is still in its electronic infancy, the idea is already plausible enough for people to take it seriously.
“It probably wouldn’t have made a whole lot of sense for researchers in 1990 to present participants with a supposedly ‘intelligent’ virtual agent,” social psychologist Jan-Philipp Stein told Vocativ by email. “It’s a completely different story in 2017.”
Earlier this year, Stein and his colleague Peter Ohler at Germany’s Chemnitz University of Technology published the results of an experiment in which 92 people, split into four groups, watched the exact same emotionally charged dialogue between two digitally created avatars. The difference was what each group was told: that the conversation was controlled either by humans or by A.I., and that the dialogue itself was either pre-scripted or improvised by whoever was controlling the avatars. The combination in which computers were supposedly holding an emotional, empathetic conversation of their own creation was significantly more unnerving for participants than any other.
So what’s driving this anxiety? Stein sees it as something deeper than a general unease around the unexpected. “My current theory is that a large part of the observed distrust relates to a very basic perception of threat,” he said. “Any cue that makes people doubt their dominance over a technological creation, as well as their own uniqueness in face of it, will drive the entity directly into the uncanny valley of mind.”
Fear of the uncanny valley might be particularly pronounced in the West, where stories of uncanny, almost-human creatures like zombies and Frankenstein’s monster have endured for centuries. Research suggests people from East Asian cultures are generally less bothered by the uncanny valley and more accepting of robots and A.I. In their paper, Stein and Ohler suggest this may reflect a centuries-old philosophical divide: Christian-influenced cultures in particular tend to view the human body as a mere vessel for the soul, which is the true marker of a person. Eastern religions like Buddhism and Shintoism don’t see the soul as an inherently human concept, so it may well be culturally easier for people in Japan to deal with an emotional robot than it would be for people in the United States or Europe.
Whatever the explanation, billions of people seem to hold a deeply rooted distrust of feeling machines — a serious roadblock for the development of more sophisticated thinking machines. The idea of the uncanny valley predates the modern-day development of artificial intelligence, which may well mean programmers and designers are facing what is ultimately an unbridgeable gap. “I do not believe that certain primal conflicts… will disappear from the human psyche anytime soon,” said Stein. “In my expectation, A.I. designers will continuously have to deal with that.”
Even so, Stein said he wouldn’t want people to take his and Ohler’s research as proof of some kind of unalterable “People hate emotional A.I.” truth. While the idea of a human-looking robot or a fully realized digital avatar with the capacity to think and feel might long remain a challenge for humans to accept, there are other contexts — such as education, the entertainment and service industries, or fulfilling the destiny of Microsoft Word’s Clippy as a productivity helper — where an emotionally aware program might not be so intimidating. As Stein sees it, much will depend on how clever developers can be in making users feel like they are still in control, even as the A.I. thinks and feels for itself. Giving people an opportunity to get used to those more innocuous examples of emotional A.I. could ease the acceptance of more sophisticated models.
“As has been observed with the traditional ‘uncanny valley’ of appearances, I suppose that we will witness strong habituation effects, as people become more used to refined A.I. technologies — neural networks, big data, and deep learning — in their daily life,” said Stein. “I wouldn’t dare to assume that A.I. is inherently ‘doomed.’”