So if knowing plenty of facts and making hollow small talk are not skills that allow a computer to pass the Turing test, what aspect of "human-like intelligence" is being left out? The one thing we have that computer programs do not: bodies.
According to French, intelligence actually floats on a "huge sea of stuff underneath cognition" – and most of it consists of associative sensory experiences, the "what it's like"-nesses built up by a physical body interacting with a physical world. This "subcognitive" information could include the memory of falling off a bike and skinning your knee, or biting into a sandwich at the beach and feeling sand crunch between your teeth. But it also includes more abstract notions, like the answer to the following question: "Is 'Flugly' a better name for a glamorous actress or a teddy bear?"
Even though "flugly" is a nonsense word, almost any English-speaking human would pick the teddy bear, says French. Why? "A computer doesn't have a history of embodied experience encountering soft teddy bears, pretty actresses, or even the sounds of the English language," French says. "All these things allow human beings to answer these questions in a consistent way, which a computer has no access to." Which means any disembodied program has an Achilles heel when it comes to passing the Turing test.
But that may soon change. French cites "life-capturing" experiments, like MIT researcher Deb Roy's ongoing efforts to record every waking second of his infant son's life, as a possible way around the embodiment problem. "What would happen if a computer were exposed to all of the same sights and sounds and sensory experiences that a person was, for years and years?" French says. "We can now collect this data. If the computer can analyse it and correlate it correctly, is it unreasonable to imagine that it could answer 'Flugly'-type questions just like a human would?"
French does not think on-the-fly analysis of such a massive dataset will be possible anytime soon. "But at some point in time, we're going to get there," he says. At which point – assuming it works, and a computer program passes the Turing test – what will this mean in practical terms? Will we deem the device intelligent? Or will we simply add "have a convincing conversation" to the ever-growing list of interesting things that computers can do, like "beat humans at chess" (as IBM's Deep Blue did in a match with Garry Kasparov in 1997) or "play Jeopardy! on television" (as Watson did in 2011)?
The renowned computer scientist Edsger Dijkstra said that "the question of whether machines can think is about as relevant as the question of whether submarines can swim." Semantics and philosophy of "intelligence" aside, a computer that can pass the Turing test can do exactly one thing very well: talk to people like an individual. Which means that the Turing test may simply be replaced by a different question, one that is no less difficult to answer. "We can imitate a person," says Brian Christian. "But which one?"