Alan Wagner, a social robotics researcher at the Georgia Tech Research Institute in Atlanta, Georgia, ran a study in which he simulated a fire in a building and asked people to follow a robot to safety. The robot, though, led them into the wrong rooms, took them to a back door instead of the correct exit, and (by design) broke down in the middle of the emergency evacuation.
Yet through all of that, people still followed the robot around the building, hoping it would lead them outside. The study demonstrated to Wagner that people have an “automation bias”: a tendency to trust an automated system even when they shouldn’t.
“People think the system knows better than they do,” Wagner said. Why? Partly because robots have been presented as all-knowing, and partly because previous interactions with automated systems have worked properly, so we assume that every system will do the right thing.
And because robots don’t react to or judge what someone says, we project our own biases onto these automated beings and assume they’re rooting for us no matter what, he said.
However, Wagner says it’s important to remember that someone – a mutual fund company, an advisor – is controlling the bot in the background, and they want to achieve certain outcomes. That doesn’t mean people shouldn’t be truthful with a robot, but these systems are fallible.
“You have to be able to say that right now I shouldn’t trust you, but that’s extremely difficult,” Wagner said.