Q-W-E-R-T-Y. Six letters that define so much of our waking lives.
If they are not there on the screen in front of you, chances are they are only a click away.
In some ways, these six letters are a triumph of design. They’re wired into our brains, replicated on keyboards, phones and tablets across the world – and have changed very little since Milwaukee port official Christopher Sholes used the layout to stop mechanical levers jamming on a 19th-Century typewriter.
In another sense, though, the more than 140 years of continuity embodied in keyboards reveal a strange tension behind technology’s claims of progress and perfectibility. And it’s the same for other interfaces. The mice attached to almost every desktop system in the world still conform to the same essential design set out in the 1965 paper on “computer-aided display control” that coined the term. Even touchscreens ape established layouts and conventions.
Appropriately enough, the name for this inertia is the “Qwerty phenomenon”. Some things simply seem to be too deeply and universally engrained to be susceptible to change, even when change would bring numerous advantages. Having found a design that largely fitted our early needs, we gave up on alternatives (the Dvorak Simplified Keyboard, patented in 1936, is the only other option with any global following).
Yet it’s not just the physical conventions locked into our devices that matter, but the assumptions bound up with them – and the way these assumptions define as well as serve our purposes.
Take the way these six letters encourage us to treat our hands. The 27 bones, over 60 muscles and tendons, and three nerves of the human hand are sensitive to minute variations in pressure, velocity, position, temperature and texture. They are effortlessly able to execute three-dimensional manoeuvres while sensing and responding to all of these. Yet, in computing terms, all this incredible bandwidth is usually funnelled into tapping on keys able to recognise only two information states – on and off. Even the most advanced touchscreen is barely able to register five fingers’ worth of contact points on its textureless, depthless surface.
There are, of course, a few notable exceptions. But, by the standards of pre-digital technologies like musical instruments, the way in which we bring our abilities directly to bear on our creations is laughably crude. It is a little like asking an orchestra to perform Beethoven by banging their instruments against the ground.
And this may help explain the nature of one of my most illuminating recent tech experiences when, earlier this month, I was speaking at a conference organised in London by the Economist magazine. Some of the world’s best digital brains were present. Yet, between talks, I kept finding myself drawn away from conversation and towards a strange musical instrument sitting, together with two of its creators, in one corner of the central reception room.
For someone who has played the piano for twenty-five years, the Seaboard was an exquisitely bizarre encounter. A sleek black piano keyboard with a ribbed and rubberised surface, it looked like a silicone mould for making music-themed desserts – and felt, when I was graciously allowed to sit down and play, like massaging a giant bag of jelly sweets. Digging my fingers into the (startlingly robust) keys mixed familiarity with sudden ineptness. Onto the concept of a piano had been grafted several entirely new layers of physical interaction.
What was truly remarkable was the degree of control on offer. According to its London-based maker ROLI, the instrument has a “soft three-dimensional surface that enables unprecedented real-time, intuitive control of the fundamental characteristics of sound: pitch, volume, and timbre”. In other words, you can change the character of each note you’re playing by literally flexing and vibrating your fingers as they press into the rubbery keys, while the keys still offer the standard keyboard business of pressure sensitivity and mechanical action. Instead of laboriously applying effects to your music post-recording, you can generate them in real time without lifting your hands from the instrument.
My brief encounter left me astonished, above all, by the way in which to play was to start learning with the body as much as with the mind. Because everything happened in real time, in response to so many simultaneous layers of action, I was literally feeling my way into a new form of expression – aided by the underlying familiarity of the pattern beneath my fingers. Here was that rarest of things, a new musical instrument that might actually catch on among musicians; not to mention a cutting-edge technology pushing back against the digital dream of effortless manipulation. To shape its sounds well would demand effort, practice, artistry – and an individual human touch.
Most obviously, the future of interfaces like the Seaboard’s lies in applying its principles to other musical instruments. Beyond this, though, it also seemed to me to represent something far larger: the possibility of everyday computer interfaces able to respond to human hands with something of the incredible sophistication they themselves possess.
For this wasn’t so much the reinvention of the traditional concept of a keyboard – a word known to music long before its use in typing, let alone computing – as its augmentation into another dimension. The very idea of everyday computer interaction via such a tool may seem outlandish. Yet, even as it currently exists, the rubberised interface can be remoulded to fit almost any size or shape. Why shouldn’t we seek to expand the vocabulary of our interfaces in this way?
The current answer is as much prejudice and momentum as reason. Nobody, I’m sure, wants simply to type onto a spongy keyboard. But sensitivity to depth, pressure, vibration and texture might transform how we relate to countless complex tasks, ranging from games and simulations to graphic design and multi-dimensional data. How much more sophistication could be achieved via “soft” mouse buttons – and how much more intuitive feedback received? How much more effortless might it be to select, refine and manipulate objects onscreen not by tapping buttons, but by sinking one hand into a single, subtly responsive surface?
For now, these are just pipe dreams. Too often, though, we treat ourselves as disembodied when staring into our screens. It’s a fact reflected in almost all futurological musings on man-machine interfaces. While possibilities like natural speech recognition and mind-machine connections feature prominently in films and books, the basic facts of the human body often get short shrift. Yet, decades from today, the current crop of push-button keyboards and two-dimensional touchscreens may seem as crude as the early typewriters that still wield such influence: a template we cannot shake off entirely, but that need not ensnare us.
Though we cannot simply shake off Qwerty, we can remake it in our own image. There’s something marvellously compelling, for me, about the proposition that the most intricate biological tools we possess should be able to do more than tap against inert plastic keys or glass. We deserve an interface between man and digital machines that isn’t denuded of analog complexity – and that might redress an inadequacy so fundamental that we’ve stopped even thinking of it as one. As I’ve recently discovered, touching is believing.