Other researchers are working on "emotionally intelligent" interfaces that use cameras to read users' emotions. By analysing facial expressions, these systems can spot universal characteristic signs of anger, confusion or other feelings. Emotions give rise to very similar facial expressions across different cultures, so such systems could be used anywhere, says Peter Robinson, professor of computer technology at the University of Cambridge.
Call centre staff are routinely given training or scripts to help them deal with angry customers, and teachers use facial expressions to understand how well their students are coping with lessons. Researchers are developing systems that provide computers with similar information using algorithms that analyse the position of features such as the mouth and eyebrows in images of a user's face. "If a computer can tell that the student is confused then it could adopt the same techniques as a human teacher, perhaps by presenting information differently, or by providing more examples," says Robinson.
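To give a flavour of the approach described above, here is a toy sketch of classifying an expression from the positions of facial features. Real systems, including the Cambridge work, use trained models over many tracked points; the landmark names, coordinates and thresholds here are invented purely for illustration.

```python
# Toy sketch: guess an expression from a few (x, y) facial landmarks.
# Landmark names and thresholds are invented for illustration; real
# emotion-recognition systems use trained statistical models.

def classify_expression(landmarks):
    """Classify using hypothetical landmarks normalised to the face
    bounding box (0.0-1.0, with y increasing downwards)."""
    brow_y = (landmarks["left_brow"][1] + landmarks["right_brow"][1]) / 2
    mouth_left = landmarks["mouth_left"]
    mouth_right = landmarks["mouth_right"]
    mouth_centre_y = landmarks["mouth_centre"][1]

    # Lowered brows often accompany anger or confusion.
    if brow_y > 0.45:
        return "furrowed brow: possible anger or confusion"
    # Mouth corners sitting above the mouth centre suggest a smile.
    if mouth_left[1] < mouth_centre_y and mouth_right[1] < mouth_centre_y:
        return "smile: likely positive"
    return "neutral"

# Made-up coordinates for a smiling face
smiling = {
    "left_brow": (0.35, 0.30), "right_brow": (0.65, 0.30),
    "mouth_left": (0.38, 0.72), "mouth_right": (0.62, 0.72),
    "mouth_centre": (0.50, 0.75),
}
print(classify_expression(smiling))  # -> smile: likely positive
```

A production system would feed such geometric features into a classifier trained on labelled images rather than hand-written thresholds, but the pipeline - locate features, measure their positions, map positions to an emotional state - is the same shape.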
While these kinds of systems try to make the interface almost invisible, another approach integrates existing technologies such as cameras, sensors, screens and computers into everyday objects. This approach – known as "tangible computing" – would allow anyone to interact with computers using physical "things" rather than through special input devices. For example, you could indicate your chess move to a computer by moving a piece on a physical board rather than by entering your move using a keyboard and mouse.
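The chess example can be sketched in a few lines: a tangible interface only needs to compare two snapshots of the physical board to work out what the player did. The board encoding below is invented for illustration, not taken from any real system.

```python
# Sketch of the tangible-chess idea: infer a move by comparing two
# snapshots of a physical board, as a camera or sensor grid might
# report them. '.' marks an empty square; the encoding is invented.

def infer_move(before, after):
    """Return (from_square, to_square) implied by two board states."""
    source = target = None
    for square, piece in before.items():
        if piece != "." and after.get(square, ".") == ".":
            source = square    # a piece vanished from this square
        elif after.get(square) != piece and after.get(square, ".") != ".":
            target = square    # a piece appeared (or was captured) here
    return source, target

before = {"e2": "P", "e4": ".", "d7": "p"}
after  = {"e2": ".", "e4": "P", "d7": "p"}
print(infer_move(before, after))  # -> ('e2', 'e4')
```

The point is that the player never touches a keyboard or mouse: the physical object is the input device, and software reconstructs the intent from how the object changed.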
Already, researchers at the Massachusetts Institute of Technology in the US are beginning to explore the extremes of this notion, proposing everything from building blocks and trays of sand to sheets of malleable material and levitating balls to control on-screen experiences.
"The mouse and keyboard won't go away completely as they are an extremely fast and efficient way of interacting with computers, but we are going to see a lot more manipulating and placing of real life things," says David Kurlander, formerly of Microsoft's User Interface and Graphics Research Group. "We'll also see more pointing, speech, and combinations of these." He also predicts that flat surfaces such as tabletops, walls or windows will be used as display screens, with images projected from personal projectors mounted on clothing or worn around the neck.
Gamers have become used to controlling games consoles with physical movements using devices such as the Kinect - Microsoft's motion-sensing device that can track the movement of objects in three dimensions using a camera and depth sensor. And the software giant has even created software that could allow people to control a Windows 8 machine using Kinect. Inevitably, comparisons are drawn with the futuristic user interface used by Tom Cruise's character in the film Minority Report to manipulate vast swathes of information and swipe it around multiple virtual screens using extravagant arm movements. However, interface experts are sceptical of Hollywood's take on the future.
"People tend to be lazy and large arm movements are very tiring, so I think it is very doubtful that people will ever be communicating with computers using dramatic gestures when they could achieve the same results with a mouse or their voice, or some other way, such as tiny finger movements," says Kurlander.
This last idea is the concept behind Leap, a small 3D motion-sensing device that sits in front of a computer and allows users to browse websites, play games or use other software using finger and hand movements. An impressive promotional video released earlier this year by its creator, Leap Motion, based in San Francisco, shows examples including someone navigating around a satellite image of a city with mid-air swipes and pinches, and then using thumb movements to fire a gun in a first-person shoot 'em up game. The $70 gadget is due on the market early next year.
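Underneath, a gesture such as a mid-air swipe reduces to tracking a hand's position over a short time window and checking for a large enough displacement. The sketch below is a guess at the general shape of such logic; the sample data and threshold are invented, and real devices such as Kinect or Leap expose far richer 3D tracking through their own SDKs.

```python
# Illustrative sketch: turn a short stream of tracked hand positions
# into a "swipe" event. Sample values and the threshold are invented;
# this is not how any particular device's SDK works.

def detect_swipe(samples, min_distance=0.3):
    """Return 'left' or 'right' if the hand's x-position (normalised
    0.0-1.0) moved further than min_distance across the window."""
    if len(samples) < 2:
        return None
    displacement = samples[-1] - samples[0]
    if displacement > min_distance:
        return "right"
    if displacement < -min_distance:
        return "left"
    return None

# A hand moving steadily left-to-right across the sensor's view
print(detect_swipe([0.1, 0.25, 0.4, 0.6]))  # -> right
```

The threshold is what makes Kurlander's point concrete: set it low and tiny finger movements suffice; there is no need for the sweeping Minority Report arm gestures.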