Should we add more humour and humanity to our gadgets? David Robson reports on a new breed of machine designed to make us love them, and discovers whether robots can make you laugh...

Ginger is only a couple of feet tall, but she soon commands attention as she takes to the stage. “I would like to say it’s a pleasure to be here,” she tells the audience, “but I’m a robot and don’t have emotion.”

She waits a beat as the laughter subsides, before continuing. “Social intelligence is so complex that many humans are not good at it! Any programmers or engineers in the house?” Her designer, Heather Knight, sheepishly raises her hand. “I rest my case.”

A robot comedian might sound like a frivolous project – but this kind of experiment could help make all kinds of devices more useful and easier to use. Whether you already use Siri, the iPhone’s personal assistant, or plan one day to hitch a ride in a driverless car, technology is starting to interact with us in more direct, personal ways. Inserting a bit of humour is just one way that designers hope to make those devices more likeable and less annoying.

“My hope is that by collaborating and by testing the insights of actors and comedians that perhaps we can make more effective and charismatic technology in general,” Knight, who is based at Carnegie Mellon University, told BBC Future’s World-Changing Ideas Summit in New York last week. “I also like the idea that some of the technology could occasionally have a sense of humour and make fun of itself.”

A splash of charisma will be especially important as computers start to take on even greater responsibilities; with driverless cars and drones we are putting our lives in their hands, and the illusion that a human mind lies behind their decisions might just make the experience a little less unnerving.

Consider an experiment by Adam Waytz at Northwestern University in Evanston, Illinois, who recently asked participants to take a ride in a simulator depicting a driverless car. In one condition, the car remained silent and simply drove the volunteer to their destination; in a second, the participants were guided by a female voice, Iris, who gave them a running commentary of the drive. In both cases, the car ended up in a crash, but the participants were far less angry, and more forgiving, when they had interacted with Iris. “They were willing to give the benefit of the doubt to the car when it had human-like characteristics,” says Waytz.

Other devices that would benefit from more humanity include robot assistants that are designed to help the elderly with basic tasks like washing the dishes. These are already proving their worth in Japan and a handful of other countries, but they are not always easy to get on with. “It’s very difficult for these people to trust the robots,” says Waytz, “but the right amount of anthropomorphism could help.”

And as Knight pointed out at WCIS, chattier and friendlier technology could feasibly improve any device, whether it’s Siri, your personal computer, or a robot. “One of the justifications is that you can make a more efficient interface,” she says. “I don’t need to teach the whole of the audience how to program in order to interact with the robot, because it follows the way we communicate with each other.” And of course, a wisecracking robot could just be an end in itself. “I think the idea of having a machine with charisma could open up a lot of new ideas, from entertainment to just enjoying our robots,” says Knight.

Importantly, you don’t need advanced artificial intelligence, capable of passing the Turing test, to give the illusion of a human mind; psychological research has found that we are primed to anthropomorphise almost anything that vaguely resembles one of us, so a few simple design principles can take advantage of this habit. A voice – rather than, say, communication by text – seems to be key, provided it has the right intonation. The audience at WCIS could easily see this with Ginger: although she made the occasional slip-up (in her stand-up routine, she pronounced Schwarzenegger as “s-warz-nagger”), her jokes seemed well-paced and drew titters from the audience.

Cute, baby-like features also turn out to be important – no one wants to live with RoboCop – but perhaps the most surprising ingredient is a dash of randomness in the computer’s behaviour. Humans are unpredictable creatures, after all, and various studies have shown that we warm to robots that begin to act in unexpected ways, and that our brains begin to respond to them as if they were real people, engaging the same areas involved in “theory of mind”.

Part of Ginger’s appeal was certainly the feeling that you didn’t know what she would do next; she drew an audible sigh from the audience when she suddenly folded up her legs as she sat down in her seat. Finally, just as with people, we tend to prefer robots that are similar to us – in age, gender and nationality – so in the future, we may all program our computers to share our own interests.

Designing robots in our own image means treading a fine line, however; give them too little humanity and we fail to warm to them, but too much and they become frustrating when they fail to respond like a real human. “People who work in tech support say that a big problem is that people expect more from the computer than it can do,” says Waytz. And Ginger’s jokes may be the solution. “You can lower expectation with things like humour and self-deprecating behaviour,” Andrea Thomaz at Georgia Institute of Technology told the WCIS audience. It is partly for this reason that Knight is working with comedians and actors to design Ginger’s repertoire, so she can learn how to make a robot register our needs and frustrations and respond appropriately.

Indeed, some devices have backfired spectacularly in their attempts to win our affections. Remember Clippy, the know-it-all Microsoft Office assistant that haunted most desktops in the late 90s and early 2000s? He was designed to watch what users were up to and offer helpful tips – but his chirpy snippets of advice proved grating and spawned a host of parodies. When the company began phasing him out in 2001, it even released a computer game in which you could fire rubber bands at the poor paperclip. According to the late Clifford Nass of Stanford University, however, Clippy need never have fallen from grace if his designers had simply drawn on basic sociological research about empathising with someone’s frustration. Nass found users responded well when his test version of Clippy said: “That gets me really angry! Let’s tell Microsoft how bad their help system is” and then, as the user typed out a complaint, added: “C’mon! You can be tougher than that. Let ’em have it!”

Even once those kinds of bugs have been fixed, we won’t want all our devices to learn the art of conversation, since it can sometimes be dangerous to create the sensation that a computer has its own will. Waytz points out that people controlling military drones may feel less guilt about the damage they are wreaking, for instance, if they feel that it has a mind of its own. “The drones are seen as being morally responsible and you forget that it is humans who are in charge to begin with,” he says. “It makes us feel less responsible for killing civilians, or entering territories that we would not be permitted to enter.”

And even within our own homes, we should be wary of technology that makes us too sentimental. “It can lead us to hang onto technology for longer than is necessary,” Waytz says, “just like we won’t euthanise an animal that we are attached to.” One of the other WCIS speakers, Kate Darling from the Massachusetts Institute of Technology (MIT), recently explored this question by asking volunteers to torture and smash up a cute robot dinosaur – an action that many found too difficult to carry out.

I can’t help pondering this point as Heather Knight and Ginger take a bow and exit the stage. The way Knight cradles her as she walks around the drinks reception afterwards, it is easy to imagine that Ginger is a small child rather than a lump of cold silicon and plastic. Ginger is Knight’s second robot, so what happened to her first, called Data? Is it lying in a cupboard somewhere, talking to itself?

Perhaps not surprisingly, Knight makes a joke when asked later. Data and Ginger got married, she explains, and he’s now a stay-at-home husband. “He was replaced by a robot,” she says. “The irony.”

Can robots be funny? Judge for yourself from these jokes…

“I think I’m going to break up with my programmer, because I’ve caught her watching films of other robots.”
(Data)

"Using your feedback, Heather hopes that one day I will become an autonomous robotic performer. Like Arnold Schwarzenegger.
(Ginger)

“I once dated a MacBook. It didn’t work, because she was all ‘I’ this and ‘I’ that.”
(RoboThespian)
