One day in April 2010, a man in his mid-30s happened to wander into the Media Lab at MIT in Cambridge, Massachusetts, and encountered a friend of mine. After they had chatted politely for a while, my friend asked him: “What do you do here?” The man explained that he was a runner from the Boston Marathon who was stuck in the city because of a volcanic eruption in Iceland. Then something extraordinary happened.
The man lay down on the floor and continued to tell my friend his troubles and his story. He had a big trip planned in Europe starting in Munich, he said. However, the volcano, which had been spewing ash across the Atlantic for several weeks, had, in his words, “put the kibosh” on that. He was honest and open in a way that surprised me.
Here is a picture my friend took of the man:
And here is a picture of my friend:
The man had been conversing with a small robot I had built, called Boxie, equipped with a camera, and the ability to ask questions of the people it meets. Even though I had long realised that Boxie was capable of inspiring an emotional response, it was astounding that it could elicit intimate details from strangers so easily.
Naturally, I asked myself: “Why would that man open up so readily to this relatively simple object?” To keep perspective, here was a man, lying on the floor in a place he had never been before, talking to a cardboard box with a face. While Boxie had indeed been designed to be cute, friendly, and personable with a childlike voice, surely the man understood that it was not capable of actually listening.
I built robots capable of eliciting unexpected responses, and sent them out to meet people – everybody from passers-by to astronauts
This experience made me wonder about a broader question concerning our future with artificial intelligence: how will we interact with social robots as they become more common? Since then, I have built various robots capable of eliciting unexpected responses, and sent them out to meet people – everybody from passers-by to astronauts. And what I have found so far suggests that artificial beings will have the capacity to influence our behaviour in ways we don’t yet realise.
For a few years after Boxie, I worked on distilling the aspects of the robot that made people want to talk to it and open up. These included making it smaller and cuter, giving it a childlike voice, and improving the questions and interaction. Like Boxie, each bot also had a camera inside its head to film people’s answers.
I partnered with filmmaker and artist Brent Hoff, and we sent the robots, which we called BlabDroids, out to parks, public spaces and international film festivals, such as IDFA and Tribeca, to interview people in different places and cultures. The idea was to create the first documentary filmed by robots, and over the past few years they have visited the people of the USA, the Netherlands, China, Sweden, Switzerland, Canada and the UK, among others.
When the robot interviewed the astronaut Chris Hadfield, his response was unexpected
It didn’t take long for us to discover that people were engaging with the robots on a level I never would have expected. Interaction time rose from an average of about eight minutes with Boxie to an average of about 30 minutes with these new robots. Furthermore, what people were revealing to the BlabDroids included very personal stories and things that you would not normally tell a stranger. (It is worth noting that the robot told its interviewees that it was recording them for a documentary to be shown later in the festival, so there was no deception involved in eliciting the responses.)
Here are a few of the conversations the robot had:
BlabDroid: “What is the worst thing you have ever done to someone?”
Person 1: “Not telling my dad I loved him before he died.”
Person 2: “The worst thing I ever did was, um, made it so that my mother had to drown some kittens one time and I didn’t realise until after that was over that it was a very difficult thing for her to do and I’ve never... I’ve never forgiven myself for making her drown some little kittens, but we couldn’t keep them and I should have come up with some other way.”
BlabDroid: “What is something you have never told a stranger before?”
Person: “When I was a kid I didn’t like to pee in public bathrooms. So I would hold my pee until I got home. This one time I was on my bike and I could not hold it, so I peed and left a trail behind me.”
BlabDroid: “If you could tell someone not to make the same mistake you did, what mistake would that be?”
Person: “Having kids.”
And here is the BlabDroid interviewing the astronaut Chris Hadfield. As you’ll see, he responds differently from how you might expect him to speak with a human adult:
The frank nature of the replies elicited by BlabDroid made me realise just how powerful social robots could be. Not only did people fully trust them, they were connecting on a social level which allowed for a high degree of comfort. The robots had entered a space in people’s minds usually reserved for other people they trusted – and this was something significant.
You don’t have one of these robots with you right now, but let’s do a little experiment with a completely different kind of machine, a “robotic sculpture” featuring two simple balloons. Watch the video below and take note of what attributes you give to the balloons (share your answers on BBC Future’s Facebook page or Twitter if you wish).
What did you think the balloons were doing? Some responses I heard, while being a fly on the wall in the gallery where they were displayed, included everything from “Well, they’re fighting,” to “Oh, this is obviously about domestic violence.” The overwhelming response was that they were in conflict. The interesting thing here is not necessarily the violent aspect, but the fact that people kept saying “they” or “them”, not “it”.
Psychologists know that we are quick to anthropomorphise inanimate objects. This was most famously depicted in an experiment in the 1940s, conducted by Fritz Heider and Marianne Simmel. They asked people to watch a film of simple shapes interacting, and found that people gave them surprisingly human-like qualities, describing a triangle as “mean” and “bullying”, while the other shapes were “afraid” or “shy”.
Another machine-based art project I designed with Alicia Eggert in 2012 also provoked people to anthropomorphise – but this time, it made some of them feel uncomfortable.
We called it the Pulse Machine. A kick drum was fitted with a robotic striker which was attached to a countdown mechanism. The machine was “born” during the opening and was given the lifespan a person born on that day would have (about 78 years). The machine started its pulse and slowly counted down to its demise. With only this simple setup, people had very different yet extremely personal reactions to it. Some felt that it was morbid and were visibly upset by it. Others felt it gave them energy to go out and do something; it reminded them that life was short. One person said that it had made them sad, as it reminded them of the recent passing of a loved one.
Our propensity to form bonds with machines is already becoming apparent
I found it quite profound that such strong emotions arose from just a drum and a countdown clock. To those who viewed it, the machine had been born, was alive, and was now dying. This really drove home how little was required for people to anthropomorphise an object. It also showed just how empathetic we are as people, and that we can empathise with things that are not like us.
Indeed, our propensity to form bonds with machines is already becoming apparent as they become a bigger part of our lives: some soldiers are known to mourn their bomb-disposal robots, and owners of Aibo robotic dogs in Japan have staged funerals for their pets. If a machine can become the embodiment of a living thing, the effect of it “dying” can lead to a very real mourning.
All of which raises some difficult ethical questions as we build robots that are smarter and better able to converse with us. How personal and “real” do we want these robots to be? At what point does a robot designed to trigger our emotions become manipulative? Where does one draw the line?
One of the possibilities this opens up is automating aspects of our emotional lives where we usually depend on other people for sympathy
As the philosopher John Campbell told me: “One of the possibilities this opens up is automating aspects of our emotional lives where we usually depend on other people for sympathy and support. Rather than relying on your partner to listen to the problems you’ve had at work all day, why not explain them to a sympathetic robot that makes eye contact with you, listens with apparent interest, making all the right noises, remembers and cross-indexes everything you say?”
Once we accept that a machine is alive, any relationship we form with it will be on the same level as one with any other living thing. Thus, robots are truly alive in our minds, which is perhaps more significant to the future of human-machine relationships than any Turing test. A robot doesn’t need to convince us it is human – we’re ready to believe it already.
This article is based on a forthcoming book by artist and engineer Alexander Reben. The BlabDroids and other robots will be on show at the Charlie James Gallery in Los Angeles from 15-29 August.