Computers can now predict what you will say before you know it yourself, and as philosopher Evan Selinger argues, this could subtly shape your conversations without you realising.

Do you know what you really want? Right now, there are computers all over the world busily trying to tell you the answer – often before you know yourself.

If you’ve bought books or music on Amazon, watched a film on Netflix or even typed a text message, then these mind-reading machines may have steered you to that choice by making recommendations. These predictive algorithms work by finding patterns in our previous behaviour and making inferences about our future desires – and they are everywhere.
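To make this concrete, here is a deliberately minimal sketch of the general idea in Python: a bigram model that suggests the words which most often followed the current word in your past messages. It is an illustration only; the sample messages and the `suggest` helper are invented for this sketch, and real systems such as QuickType are far more sophisticated.

```python
# A minimal sketch of how predictive text can work in principle: count which
# words have followed the current word in a user's past messages, then
# suggest the most frequent continuations. Illustration only, not any
# vendor's actual algorithm.
from collections import Counter, defaultdict

history = [
    "see you at dinner tonight",
    "running late for dinner sorry",
    "dinner at six works for me",
    "see you at six",
]

# For each word, count the words that have followed it before.
following = defaultdict(Counter)
for message in history:
    words = message.split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1

def suggest(word, n=3):
    """Return up to n of the most frequent past continuations of `word`."""
    return [w for w, _ in following[word].most_common(n)]

print(suggest("dinner"))  # ['tonight', 'sorry', 'at']
print(suggest("at"))      # ['six', 'dinner']
```

Even this toy version shows the core move: the system never asks what you mean to say, only what you have tended to say before.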

These algorithmic mind readers are now turning to a new task: anticipating what you’re going to type next. It’s autocorrect on steroids, and promises to shape our behaviour in unexpected ways. What will happen to communication when algorithms offer to speak for us?

When it comes to writing, we’ve become increasingly dependent on autocorrect and autocomplete technology to identify typos and misspelled words, predict replacements, and suggest subsequent words or search terms. When it works well, we hardly give it any thought. It recedes into the background like an invisible fairy godproofreader who is “always reading over our shoulders.”

Last year, Apple introduced what it described as the next stage in this technology: QuickType, which is supposed to predict “what you’re likely to say next. No matter whom you’re saying it to.” Apple is so pleased with the product that it says the tool yields “perfect suggestions.”

In simple cases, QuickType offers obvious, generic possibilities. If someone sends you a text asking if you want to go to dinner at 5 or 6, it might fill the three suggestion slots with “5,” “6,” or “not sure.” In principle, however, the technology can do much more.

Apple represents QuickType as so contextually sensitive it adapts its recommendations to the different styles we use when talking with different people. Ostensibly, it can determine that “your choice of words is likely more laid back with your spouse than with your boss” and act on that information.  

Text harbinger

As to the accuracy of QuickType, well, sceptics say it is overstated. And just as autocorrect fails have become the butt of widely circulated jokes and privacy laments, we’re already seeing humorous depictions of QuickType falling short. Absurd QuickType songs have even been composed.

But even if the current version of QuickType is glitchy, future versions will improve. By then we’ll probably be even more accustomed to using predictive technology to cope with the demands of personal and professional life. That’s why we need to think carefully now about whether we should want a highly functional version.

One of the implications of our future with predictive text is that it could transform us into what I call personalised clichés.

What does that mean? Let’s start by considering the definition of a cliché: the Oxford English Dictionary reminds us that clichés aren’t just “commonplace phrases” and “stereotyped characters,” but also words and expressions appropriately described as “hackneyed.” Bereft of depth and significance, clichés feel dull and flat, so tedious as to be tiresome. I’d say that clichés are “a dime a dozen”, but of course that would be… cliché. So, instead, I’ll remind you of Clive James’s much fresher observation: “The essence of a cliche is that words are not misused, but have gone dead.”

Often without even realising it, we’re all prone to lapsing into clichés when speaking and thinking quickly. When poetic license and creativity aren’t called for, it’s efficient – if not reassuring – to go for the cookie-cutter and conventional formulation. Clichés can be linguistic shortcuts that allow us to communicate without having to think too deeply about the words we use.

Shallow thought

And that brings us back to autocorrect and autocomplete’s future: by encouraging us not to think too deeply about our words, predictive technology may subtly change how we interact with one another. As communication becomes less of an intentional act, we give others more algorithm and less of ourselves. This is why I argued in Wired last year that automation can be bad for us; it can stop us thinking.

When predictive technology learns how we communicate, finds patterns specific to what we're inclined to say, and drills down into the essence of our idiosyncrasies, the result is incessantly generated boilerplate. As the artist Salvador Dalí famously quipped: “The first man to compare the cheeks of a young woman to a rose was obviously a poet; the first to repeat it was possibly an idiot.” Yet here, the repetition is of ourselves. When algorithms study our conscientious communication and subsequently repeat us back to ourselves, they don’t identify the point at which recycling becomes degrading and one-dimensional. (And perversely, frequency of word use seems likely to be given positive weight when algorithms calculate relevance.)
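To see how that weighting could feed on itself, here is a toy Python sketch. It is an assumption about the behaviour, not a documented system: if relevance is simply past frequency of use, then every accepted suggestion reinforces itself, and the most-used phrase crowds out the rest.

```python
# A toy illustration of the feedback loop: when relevance is just "how often
# have I used this before", accepting the top suggestion makes it even more
# likely to be suggested next time. Counts here are invented for the sketch.
from collections import Counter

phrase_counts = Counter({"sounds good": 5, "sounds great": 4, "sounds perfect": 1})

def top_suggestion(counts):
    # Rank purely by past frequency of use.
    return counts.most_common(1)[0][0]

for _ in range(5):
    choice = top_suggestion(phrase_counts)
    phrase_counts[choice] += 1  # accepting the suggestion reinforces it
    print(choice)  # prints "sounds good" every time, by a growing margin
```

The rich-get-richer dynamic is the point: nothing in this loop ever surfaces a phrase you haven’t already worn out.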

Of course, we have experienced techno-panic about texting before. Early worries focused on whether conversing in shorthand would weaken our linguistic abilities, and those fears turned out to be unfounded. The next stage of autocorrect and autocomplete, I would argue, is different. Predictive texting won’t make our abilities atrophy, but if we over-use a highly functional version of it, we may be submitting to something dehumanising.

When connecting with each other, we should remember that while we can be consistent in various ways, we’re not uniform, mass-produced goods condemned to reductive, bland, and uninspiring conversations. We’ve got spontaneous things to say that we never anticipated; declarations and questions that require careful, nuanced, and newly formulated expression.

However the future unfolds, perhaps if we can simply label the potentially corrosive effect of algorithmic conversation, it will be easier to resist the seduction of expediency. George Orwell warned of the dangers of embracing “a lifeless, imitative style” in an essay back in 1946, but we could all use a refresher. Otherwise, we risk becoming the object of a data scientist’s fantasies: a deadened self-parody.


Update: This article has been edited to clarify the difference between autocorrect and autocomplete.