Mick Walters opens a door in his lab and points his computer’s camera towards the small, blurry, tan-coloured object he has just revealed. "This is Kaspar Two," he says. As the Skype connection catches up, an image of a robot in a baseball hat, a blue button-down shirt and striped socks appears. Kaspar Two is a robot child. He's not even on, just sitting slumped over. Even though the image is somewhat fuzzy, Kaspar Two is able to give me that feeling, that nagging sense of unease. "I must admit," says Walters, "when I first actually built Kaspar, I did think he was a bit uncanny."
Kaspar was created at the University of Hertfordshire in the UK to help children with autism understand how to read emotions and engage with other people, but it falls into what's often called “the uncanny valley”. From humanoid robot heads to super-realistic prosthetic hands, the uncanny valley is where robots that give us the creeps live. It is the range between obvious cartoons and discernibly real people, where things look almost lifelike, and yet not quite believable. Peering into the uncanny valley is an uncomfortable experience. Its residents, like Kaspar, have a way of eliciting feelings of disgust, fear or dread.
For more than 40 years, the concept of the uncanny valley has acted as a golden rule for roboticists and animators. From Pixar to puppets, creating characters that are too lifelike was thought to be the kiss of death for any project. But now the concept itself is coming under scrutiny like never before. What exactly we are feeling and why we feel this way are questions that have finally found their way under the microscope. And some researchers are asking whether the uncanny valley exists at all.
What's in a name?
The first time many people encountered the concept of the uncanny valley was in 2001 with the movie Final Fantasy: The Spirits Within. Today, it is known as one of the first photorealistic computer-animated films, but at the time not everyone was impressed. The groundbreaking graphics made many movie-goers uncomfortable, and the film flopped, losing Columbia Pictures $52 million. The faces were too human, too close to real life. "At first it's fun to watch the characters," film critic Peter Travers wrote in Rolling Stone. "But then you notice a coldness in the eyes, a mechanical quality in the movements."
A link between what is almost human and what is creepy was proposed long before Final Fantasy, however. The phrase “uncanny valley” is widely accepted to have originated in 1970, with the publication of an academic paper by roboticist Masahiro Mori in an obscure journal called Energy. Mori's original paper was in Japanese, and contrary to popular belief, “uncanny valley” is only a rough translation of his title, “Bukimi No Tani”. A more accurate translation is “valley of eeriness”.
This matters because it highlights a central problem with the uncanny valley: it is an inherently woolly idea. When researchers try to study the phenomenon, they often have a hard time pinning down what an uncanny response actually looks like. The labels on the main graph in Mori's paper have been translated in many different ways, leaving many people unsure what he really meant. Mori used the Japanese word “shinwakan” on the y-axis, a word that has no direct English equivalent. The most common interpretation is “likeability”, but not all translators agree. Other suggestions include “familiarity”, “affinity” and “comfort level”.
Perhaps the most surprising thing about the concept’s history, though, isn't the translation troubles, nor the debate over what is being represented on Mori's graph, but how long it took for that debate to arise. Mori's paper didn't include any measurements. It was more an essay than a study. Yet, despite broad dissemination, the uncanny valley avoided scientific scrutiny until the early 2000s, when graphics and animatronics like Final Fantasy started giving people the creeps. As scientists started to explore Mori’s graph, they began to ask whether real data would reveal the same pattern.
Spot of bother
A few studies have suggested that the whole thing doesn’t exist. In one study, David Hanson of Hanson Robotics, in Plano, Texas, and his colleagues showed participants two different robots that had been animated to simulate human-like facial expressions. The survey simply asked the participants what they thought of the experience. The vast majority (73%) liked the human-like robots. In fact, not one person stated that these robots disturbed them.
Hanson and his team then showed the participants a continuum of images, starting with a picture of Princess Jasmine taken from the Disney movie Aladdin. Over the course of six images, Jasmine’s face slowly morphed into that of actress Jennifer Love Hewitt. The idea of these facial progression studies is to try to observe the dip in likeability that Mori predicted between an obviously cartoon image and an obviously human one. The participants were asked to rank the acceptability of each picture in the series. But, again, the scores showed no dip in the middle of the range – as the uncanny valley would predict – and none of the images seemed to bother anyone.
Why this happened isn’t clear, and not everyone thinks Hanson’s experiment is robust. Many other studies have shown the opposite. For example, Edward Schneider’s lab at SUNY Potsdam in New York collected 75 existing characters from video games and animation, including Hello Kitty, Mickey Mouse, Snoopy and Lara Croft. They asked participants how human and how attractive (or repulsive) they perceived each character to be. In this case, the researchers did find a dip in likeability in the middle of the series, roughly where the ogres from World of Warcraft sit.
Moreover, a team led by Karl MacDorman at Indiana University conducted an experiment similar to Hanson’s, using a progression of images in which a robot face slowly morphs into a human one. They, too, found a U-shaped dip in likeability in the middle of their 11-image series.
However, among the labs that have observed the uncanny valley, there is strong debate about its shape. Christoph Bartneck, a robotics researcher at the University of Canterbury in New Zealand, says that, based on his studies, a valley might be the wrong geological metaphor altogether. "As far as we can tell,” he says, "it looks more like a cliff." Essentially, he says, at the point where robots achieve extreme human-likeness, but remain discernibly un-human, their likeability plummets. And people only start to like them again when they become so human-like that they escape detection.
To make things even more complicated, there’s nothing that proves the uncanny valley reflects gradations of the same reaction. It could be a handful of reactions to different aspects of having a varying degree of human-likeness. When MacDorman showed his subjects videos of many different robots, the responses followed no clear pattern. "The results do not indicate a single uncanny valley for a particular range of human-likeness," MacDorman wrote in the paper. "Rather, they suggest that human-likeness is only one of perhaps many factors influencing the extent to which a robot is perceived as being strange, familiar, or eerie."
Questions about which reaction (or reactions) causes the uncanny valley (or, indeed, the uncanny cliff) quickly lead to other questions about why we react at all.
There are a few explanations that might account for our strange aversion to humanoid robots. One is that not being able to tell whether something is human or not can be a deeply unsettling feeling in itself. Artists and directors take advantage of this all the time for dramatic effect. The dread that viewers feel while trying to figure out who is a zombie, or Cylon, or alien might be the very same dread they feel when faced with a very realistic robot.
Another explanation focuses on the disconnect between how realistic something looks, and how well it moves. There's always been a lag between how quickly designers can make things look like people, and how quickly engineers can make them move like us. If a figure that you thought was human started to move jerkily, you would recoil. Similarly, if you were to shake a robot’s hand while expecting a human touch, but instead felt cold rubber, you would be caught off guard. An unexpected break in humanness can be an unpleasant shock, one that sets off fearful and distrustful instincts. "Whenever we see something move, and we're not familiar with the mechanism of movement, it grabs our attention," says Andrew Olney, a psychologist at the University of Memphis who works on designing intelligent robots. "If your coffee cup started slowly moving across the table, that would kind of freak you out a bit."
Finally, a third theory turns to evolution. It suggests that if a robot looks like a human, but moves unnaturally, our brains subconsciously classify what we're seeing as someone with a disease. This is the same explanation proposed for most feelings of disgust. When we stand near something like faeces, rotting flesh, or a jerking robot, we experience a sudden urge to get away from it so as to avoid catching the infections it may harbour. Some preliminary research in rhesus monkeys suggests that these animals share an uncanny valley-like response, indicating that they may have adapted to the same evolutionary pressure in the same way as we have.
We are, of course, becoming more accustomed to robots and avatars in everyday life. Between games like The Last of Us and movies like Avatar, we see computer-generated images of people all the time. Mori's original examples of uncanny objects, like a wooden prosthetic hand, probably wouldn't raise an eyebrow today because they are so obviously fake. Final Fantasy no longer triggers unsettling feelings among younger viewers, who are used to games like Crysis and The Witcher 2. The shift in expectations has been going on all along – and might well continue until technology is good enough to fool us.
This trend sets up a roboticist’s ultimate challenge: to be the first person to build a robot that other humans cannot distinguish from a real person. It is a challenge that Hiroshi Ishiguro, one of the world’s leading humanoid roboticists, believes he will one day meet. Some of his humanoid robots already interact with people, and some robot designers treat them as if they were human. When one roboticist named Peter Kahn visited Karl MacDorman’s human-computer interaction lab at Indiana University and wanted to take apart Ishiguro’s Repliee Q1Expo, a petite Japanese humanoid woman in a pink blazer, he first turned to his wife and asked, “May I touch her?”
But not everyone is convinced that we'll engineer our way out of the uncanny valley, or that it is a good thing if we do. While what makes us uncomfortable is likely to shift, the presence of discomfort won't, they argue. Potentially, it could get worse. "You can imagine cases, for example, maybe 50 years down the road, someone might be in a relationship with an android and not know it," says MacDorman. “But if there were an accident and some of the mechanical underpinnings are exposed, that would be uncanny. It would be uncanny in a different way.”