Robots are like mechanical butlers, right?
Not anymore. Some machines are programmed simply to respond to our whims, like the excruciatingly named “Botler” roaming the halls of a Silicon Valley hotel, but these days more attention is focused on robots that we can talk to, socialise with and live alongside. Robots that understand us emotionally could look after us when we’re old, for instance. Europe is expected to reach a 2:1 ratio of workers to pensioners by 2060, which is partly why many are looking to robots to help an ageing population cope.
What’s more, commercial social robots like Jibo are being marketed as “one of the family” and offer to make dealing with household tasks and personal goals more pleasurable. Jibo is designed to lie somewhere between your butler and a sibling. Its creator, social roboticist Cynthia Breazeal, says this is crucial. “What we’ve found in many cases is that people do better with social robots than they do with flat-screen devices. They learn better, for example, or they’re more successful with their weight management programme,” she explains.
Social robots may even find a role in institutional settings, helping prisoners “re-socialise” themselves during rehabilitation schemes. This has been achieved for years with prisoner pet partnership programmes, but robots might be even better at the job.
OK, so they want to be our friends. They’re still pretty dumb, though.
Compared with human intelligence, yes, robots remain limited. But social robot designers have learned how to compensate for that, partly by including toy-inspired attributes that appeal to our emotions.
Jibo, for example, has been likened to the anthropomorphised Pixar lamp and Breazeal confirms that some design cues were taken from Disney animation, which is characteristically very good at capturing emotional meaning through motion.
Aside from all of that, robots have simply got better at managing our expectations. If you say something that Jibo doesn’t understand, for instance, it won’t bark an order at you to repeat your request, but will convey, via a series of non-verbal beeps and boops, that it’s having trouble. Besides, there are many applications for social robots that don’t require fully-fledged AI anyway. Consider the “Moti”, a whirring robot ball which has been used to entertain and engage autistic children.
So everything’s going to be great, then?
Well, maybe… But there might be some issues with welcoming an army of social robots into our world. For example, while a few robots have been designed specifically to help us explore and deepen emotional responses like empathy, researchers like Kate Darling at the MIT Media Lab worry that if other robots aren’t good at provoking that kind of reaction, they might actually achieve the opposite effect and encourage violent or unfeeling behaviour.
“If you’re mistreating something or exhibiting certain types of sexual behaviour towards objects that behave in a very lifelike way like social robots,” she says, “perhaps that could desensitise people to that kind of conduct in other contexts with living things or people.” Robots could even be used by some as an opportunity to explore deviant behaviour.
Right… but at least social robots themselves won’t be malicious, will they?
Actually, they might be. In a paper, computer scientist Matthias Scheutz at Tufts University points out that the efficacy of social robots could one day be turned against us. “If it turns out that humans are reliably more truthful with robots than they are with other humans, it will only be a matter of time before robots will interrogate humans,” he comments.
And Darling, who sees many benefits in having social relationships with robots, also points out that robots might lure us into a false sense of privacy when in fact they’re recording our every word. “What if your grandma has a robot she chats to all the time, giving it all this information that she would never enter into a database?” she asks.
And then there’s the possibility of social robots being hacked. A few years ago, a group of researchers found a host of vulnerabilities in some common robots. “With current and future household robots, third parties can have eyes, ears, and ‘hands’ in your home,” they wrote.
I for one welcome our new robot overlords. Er, not.
Don’t necessarily let all that put you off. Research into these issues has only just begun, and individuals like Scheutz and Darling have already provoked healthy debate by suggesting measures such as changes to the law or a “robot bill of rights”. In addition, robot creators like Breazeal point out that a lot of these issues are far from exclusive to social robots. Security problems plague all kinds of networked devices, and social or familial tensions might be attached to any item one introduces to the home – such as a games console that ends up being fought over by the kids.
It’s arguably more important that we simply learn to anticipate some of these problems rather than let them stifle innovation because, in the long-run, the potential benefits of social robots and machine companions may far outweigh the negatives.
Kate Darling and other roboticists will be talking about the future of robotics at our World-Changing Ideas Summit in New York on 21 October. BBC Future will be covering the event in full – so watch this space.