My brief encounter left me astonished, above all, by the way in which to play was to start learning with the body as much as with the mind. Because everything happened in real time, in response to so many simultaneous layers of action, I was literally feeling my way into a new form of expression – aided by the underlying familiarity of the pattern beneath my fingers. Here was that rarest of things, a new musical instrument that might actually catch on among musicians; not to mention a cutting-edge technology pushing back against the digital dream of effortless manipulation. To shape its sounds well would demand effort, practice, artistry – and an individual human touch.
Most obviously, the future of interfaces like the Seaboard’s lies in applying its principles to other musical instruments. Beyond this, though, it also seemed to me to represent something far larger: the possibility of everyday computer interfaces able to respond to human hands with something of the incredible sophistication they themselves possess.
For this wasn’t so much the reinvention of the traditional concept of a keyboard – a word known to music long before its use in typing, let alone computing – as its augmentation into another dimension. The very idea of everyday computer interaction via such a tool may seem outlandish. Yet, even as it currently exists, the rubberised interface can be remoulded to fit almost any size or shape. Why shouldn’t we seek to expand the vocabulary of our interfaces in this way?
The current answer owes as much to prejudice and momentum as to reason. Nobody, I’m sure, wants simply to type onto a spongy keyboard. But sensitivity to depth, pressure, vibration and texture might transform how we relate to countless complex tasks, ranging from games and simulations to graphical design and multi-dimensional data. How much more sophistication could be achieved via “soft” mouse buttons – and how much more intuitive feedback received? How much more effortless might it be to select, refine and manipulate objects onscreen not by tapping buttons, but by sinking one hand into a single, subtly responsive surface?
For now, these are just pipe dreams. Too often, though, we treat ourselves as disembodied when staring into our screens. It’s a tendency reflected in almost all futurological musings on man-machine interfaces. While possibilities like natural speech recognition and mind-machine connections feature prominently in films and books, the basic facts of the human body often get short shrift. Yet, decades from today, the current crop of push-button keyboards and two-dimensional touchscreens may seem as crude as the early typewriters that still wield such influence: a template we cannot shake off entirely, but that need not ensnare us.
Though we cannot simply shake off Qwerty, we can remake it in our own image. There’s something marvellously compelling, for me, about the proposition that the most intricate biological tools we possess should be able to do more than tap against inert plastic keys or glass. We deserve an interface between man and digital machines that isn’t denuded of analog complexity – and that might redress an inadequacy so fundamental that we’ve stopped even thinking of it as one. As I’ve recently discovered, touching is believing.