One of my favourite Isaac Asimov stories, “Franchise”, imagines an election in which computing is sufficiently advanced for the preferences of an entire country to be predicted on the basis of just one voter’s actions.
We’re not quite at that stage yet. But we may be on the right path. For perhaps the greatest geek triumph of the 2012 presidential elections was the unlikely figure of statistician Nate Silver, whose FiveThirtyEight blog – which algorithmically assessed hundreds of polls based on their historical accuracy – managed to successfully predict the result in 50 out of 50 states.
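The weighting idea behind that kind of aggregation can be sketched very simply: polls from firms with a better historical track record count for more. The toy Python below is only an illustration of the principle, not Silver’s actual model; the pollster error figures and vote shares are invented.

```python
# Toy poll aggregation: weight each poll by the inverse of its
# pollster's historical average error, so more accurate firms
# contribute more to the combined estimate. All numbers invented.

def weighted_average(polls):
    """polls: list of (candidate_share, pollster_historical_error)."""
    weights = [1.0 / err for _, err in polls]
    total = sum(weights)
    return sum(share * w for (share, _), w in zip(polls, weights)) / total

# Three hypothetical polls: (candidate's share, pollster's past mean error)
polls = [(0.52, 2.0), (0.49, 4.0), (0.51, 1.0)]
print(round(weighted_average(polls), 3))  # the most accurate poll dominates
```

Because the third pollster’s past error is smallest, its poll carries the largest weight, pulling the combined estimate towards 0.51.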
His analysis – like every political story – divides opinion. To my mind, though, his work shines a light on a bigger story about our future relationship with technology, and in particular on a vision of progress where there’s an increasingly clear divide between those endeavours that can safely be left to humans, and those where machines and mathematics are preferable.
It’s something that is already happening: from the automated exploration of Mars and the use of unmanned drone aircraft for reconnaissance and remote assassination, to the analysis of probabilities and prediction. We do what we’re best at, and leave the rest to the machines. Some things have ever been thus, but never has the story of human enhancement been quite so closely entwined with the story of human redundancy.
In Silver’s case, the people who may face immediate redundancy are those professional political pundits whose speculations saturate the media at election time – or at least their replacement by suitably ideologically varied Silver-like figures next time around. In the longer term, though, the wholesale replacement of speculation with massively data-led science may be in order – not to mention the transformation of what it means to plan, as well as to predict, a political campaign.
The Obama team’s massive level of behind-the-scenes data crunching is well known: “we ran the election 66,000 times every night”, as a senior official put it to Time magazine, describing their simulation of swing state votes. What’s less knowable but far more important, though, is just how much of a role this played in shaping victory.
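“Running the election 66,000 times” describes a Monte Carlo simulation: sample each uncertain state from an estimated win probability, tally electoral votes, and count how often the candidate clears 270. The sketch below is a hedged toy version only; the states, probabilities and “safe” vote total are invented for illustration and are nothing like the campaign’s actual model.

```python
import random

# Toy Monte Carlo election simulation. Each swing state is sampled
# independently from an assumed win probability; everything else is
# treated as locked in. All figures below are hypothetical.

SWING_STATES = {            # state: (electoral votes, est. win probability)
    "Ohio": (18, 0.60),
    "Florida": (29, 0.50),
    "Virginia": (13, 0.55),
}
SAFE_VOTES = 237            # hypothetical electoral votes assumed safe

def simulate_once(rng):
    """Simulate one election night; return the candidate's vote total."""
    votes = SAFE_VOTES
    for ev, p in SWING_STATES.values():
        if rng.random() < p:  # candidate carries this state
            votes += ev
    return votes

def win_probability(n=66_000, seed=0):
    """Fraction of n simulated elections reaching 270 electoral votes."""
    rng = random.Random(seed)
    wins = sum(simulate_once(rng) >= 270 for _ in range(n))
    return wins / n

print(f"Estimated win probability: {win_probability():.3f}")
```

With these invented numbers the candidate needs Florida plus at least one of Ohio or Virginia, so the simulated win rate settles near the analytical value of 0.41; the point of running tens of thousands of trials is that such probabilities fall out of the sampling without having to enumerate every combination by hand.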
It is not just in the world of politics that so-called “big data” is having a disruptive effect, of course. Increasingly these mathematical models, backed by terabytes of data, are being used to help with everything from predicting hurricanes and diagnosing disease to spotting economic trends and modelling human behaviour.
All of which is beginning to fulfil a long-established vision of what happens when sufficiently sizeable numbers and computing power meet humanity. Companies like Google have long reckoned that there’s more to the pursuit of computational truth than wishful thinking, with ongoing research busy demonstrating that computers can drive cars more competently than us, can work out what people are thinking without them going to the bother of typing, and deliver ever-more-targeted products and services.
For some, the victory of logic over punditry is a small step in an important direction towards better understanding what it means to understand ourselves. As the web-comic xkcd pithily put it the day after the US election results, “breaking: to surprise of pundits, numbers continue to be best system for determining which of two things is larger.”
At stake, though, are metaphysical as well as mathematical issues; something that – inevitably – leaves some commentators feeling uncomfortable with equations’ capacity to measure the world more competently than we can. “The soul,” US conservative author Jonah Goldberg argued in the Chicago Tribune, “is not so easily number-crunched”.