Robots that display human-like vulnerabilities may get on better with people than ones programmed to be too perfect, researchers suggest.
Experts from the University of Lincoln found that people warmed to a robot if it made mistakes and showed human-like emotions, such as boredom.
For the tests, robots were programmed to make errors, and the reactions of human participants were monitored.
The findings could affect future robot design, said researchers.
Dr John Murray, from the University of Lincoln's computer science department, and PhD researcher Mriganka Biswas conducted experiments with three robots.
The first was Erwin (Emotional Robot With Intelligent Network) which was developed at the University of Lincoln's school of computer science, and can express five basic emotions.
The second was Keepon, a small yellow robot designed to study social development by interacting with children, while the third was a 3D-printed humanoid robot called Marc (Multi-Actuated Robotic Companion).
The researchers introduced a set of what they called "cognitive biases" into the programming, allowing the robots to make mistakes and wrong assumptions, as well as express more human-like emotions such as getting tired, bored or over-excited.
For half of the time spent interacting with human participants, the robots operated without these biases.
"We monitored how the participants responded to the robots and overwhelmingly found that they paid attention for longer and actually enjoyed the fact that a robot could make common mistakes, forget facts and express more extreme emotions, just as humans can," said Mr Biswas.
"We have shown that flaws in their 'characters' help humans to understand, relate to and interact with the robots more easily."
Companion robots are increasingly being used to support those who care for elderly people or for children with autism, but most human-robot interaction is based on a set of well-ordered and structured rules and behaviours.
"The human perception of robots is often affected by science fiction. However, there is a very real conflict between this perception of superior and distant robots, and the aim of human-robot interaction researchers," said Mr Biswas.
"A companion robot needs to be friendly and have the ability to recognise users' emotions and needs, and act accordingly. Despite this, robots used in previous research have lacked human characteristics so that users cannot relate - how can we interact with something that is more perfect than we are?"
His findings were presented at the International Conference on Intelligent Robots and Systems in Hamburg this month.
"As long as a robot can show imperfections which are similar to those of humans during their interactions, we are confident that long-term human-robot relations can be developed," said Mr Biswas.
Fellow researcher Dr Murray told the BBC that how robots interact socially will be key to future robot research.
"Obviously it depends on the application and, in a factory, we don't want robots that make mistakes," he said.
"But with robot companions, it is not so much the look of the robot but its behaviour. If you want a robot to become friendly and for people not to be wary of it, it will need certain traits."