Being comfortable in robotics' uncanny valley
The robot's eyes flick towards me, and its head turns, eyebrows raised, lips forming a smile, as if we are about to meet and start a conversation.
In the Edinburgh Centre for Robotics, I am a little disconcerted by my first encounter with an intelligent machine.
The robot's head is attached to a box rather than a body and looks more like something from a mannequin than a human.
The rational part of me knows all too well that this is simply a bundle of plastic and silicon.
"Hello, do you want to play a game?" it asks.
The facial features - the movements of the eyes, cheeks and mouth - are projected onto the inside of a translucent skull, and they shift with surprising fluidity.
And something about the machine's gaze - along with dozens of tiny adjustments of its face - is clearly connecting with some element of my subconscious because I am far more convinced than I had thought possible. And very slightly unnerved, too.
What this means, in the brave new world of robotics, is that I have entered what scientists call the "uncanny valley", a state of uncertainty and discomfort about the nature of the entity I am talking to.
As the technology and software come closer to achieving some kind of human mimicry, understanding this strange mental zone is crucial to making robots more acceptable.
And that is something advocates of robotics are determined to do - partly because huge new markets for robots have opened up and partly because there are all kinds of tasks that machines can do more safely than we can.
The most obvious examples are robots that can be used as explorers of space or the deep ocean, or as scouts or even workers in the most radioactive parts of nuclear power stations.
But we are now on the brink of a new era as machines are developed to enter more sensitive areas of our lives - everything from hospital surgeries to our homes to care for the elderly.
One of the directors of the Edinburgh centre, Prof David Lane, says there is now "a global race to be the country that develops the best smart robots".
In the 1970s, Germany, Japan and South Korea quickly dominated the market for robots designed to work in factories.
"We want to be at the front of the race if we can," he says. "We don't want to lose out the way we did with the first generation of industrial robots."
So how we interact with the new machines - and whether we like them - has become one of the most critical issues.
For many years, robots have played such a well-defined role in science fiction that we have come to assume they should look and behave in certain ways - and to take for granted that they belong to fiction.
In the Star Wars films, robots were friendly and comical. In the thriller Blade Runner, they were powerful and malevolent.
So, on meeting a robot, our minds struggle to process a combination of long-held expectations and startling new impressions.
Among those exploring the fascinating twilight realm of the interface between human and machine is Prof Ruth Aylett of Heriot-Watt University, which is part of the Edinburgh robotics centre along with Edinburgh University.
"You might think people would look at a robot and say, 'that's a lump of metal'. But research has shown that if it's responsive and has expressive behaviour, people will treat it as what we call a 'social actor' - in other words, as if it was a person, much the way we look at things in cartoons.
"We know that they are not real but we treat them as if they are real characters, as if they really had dreams and hopes like us - in other words, we suspend our disbelief.
"People do that with robots that have good interactive behaviour - we can't help ourselves."
Overcoming the uncanny valley might seem impossible but it has already been achieved with one kind of robot, specifically designed to seem friendly and to serve as a tool to help dementia patients.
Known as Paro, this Japanese machine is modelled on a baby seal with adorably large eyes, soft white fur, a plaintive cry for attention and enough intelligence to be responsive to voices and touching.
In a country like Japan, with an ageing population increasingly at risk of being left alone, the idea of a robotic companion has a strong appeal.
Some 3,500 Paro seals have been sold around the world, with several of these in use in the UK - and the experience seems to be generally positive.
Ron Abbott, an 85-year-old dementia sufferer, was among the first to encounter the robot in Britain, and a charming video shows him interacting with it, and laughing out loud as it responds.
I held one of the Paro robots myself - it was surprisingly soft and heavy - and could not help warming to the way the robot flapped its fins, opened its eyes and uttered little cries to hold my gaze.
To me, it obviously wasn't real but it did feel more alive than a toy.
For Claire Jepson, who helps to manage the robots at the Grenoside Grange Hospital in Sheffield, nearly all her patients find some benefit in spending time with them, possibly because they are drawn to the machines' apparent vulnerability.
"Our patients experience a lot of distress when they come to us. Paro provides comfort, people begin to focus on this thing; the tactile stimulation calms them.
"They seem to be reassured by offering the seal reassurance; it's crying out, it's wanting to be looked after, as a lot of patients do."
She says the robots are an addition to patient care, not a substitute for it, but some critics have questioned whether the arrival of automated assistants will lead to a cut in staff numbers.
The advent of robots in care homes has also triggered a debate about the ethics of using machines for something as sensitive as supporting the elderly.
Prof Noel Sharkey of the University of Sheffield, who has led studies into robotics, declares himself to be a fan of the technology. But he also warns that the implications of intelligent, mobile devices have not been thought through.
"Robots could help nurses lift an old person or be used to protect you from cutting yourself with a knife or leaving your cooker on," he says, before going on to outline some risks.
"We need to look at privacy. Imagine an old lady living at home: she's in the bathroom, and there's one of these robots patrolling round looking for her, and you don't know who's looking through the camera.
"It comes into the bathroom and she's standing there naked. Well, who's at the other side of that? Is there somebody smirking and giggling? So there should be some way of knocking, a design that allows you to have your privacy and your dignity."
Awareness of the potentially negative impacts of robots has led designers to concentrate on ensuring that, for a start, they are safe.
Prof Sethu Vijayakumar, a director of the Edinburgh centre, shows me a project where a robot is being taught to avoid touching a human - whenever one of the researchers reaches towards the robot, the machine twists its arm out of the way.
For him, the research focus is on what is called "shared autonomy" - where humans remain in control while delegating certain tasks to robots.
That allows people to retain oversight but at the same time to exploit the strength and precision of the machines. Full autonomy lies in the much more distant future.
And what if a robot went rogue? I ask, eyeing a robotic hand strapped to his left arm, which twitches occasionally.
"At this stage, there's very little risk of that," Prof Vijayakumar says.
"We still have significant control of the electronics, the Artificial Intelligence - and there's a kill button."
Along with "uncanny valley" and "shared autonomy", "kill button" is a crucial phrase in the robotics field.
As long as people remain nervous of the machines malfunctioning or becoming too powerful, an off-switch - a last-ditch form of human control - will always be needed.