It all started back in 1968 with a simple bet. A 23-year-old Scottish chess champion called David Levy was at a cocktail party. The party was being hosted by Donald Michie, founder of the University of Edinburgh’s department of machine intelligence and perception. Sitting beside Levy on the sofa was John McCarthy, the American who in 1955 had coined the term “artificial intelligence”.
The two got to chatting and, for fun, McCarthy challenged Levy to a round of chess right there and then. Levy beat him without much fuss. McCarthy, though, had a parting quip. Computers would be able to beat a champion like Levy within 10 years, he said. Levy laughed it off, but McCarthy stood his ground. So Levy seized the opportunity and offered a bet: £500 – the equivalent of more than £8,000 ($12,000) today – payable to McCarthy if a computer could beat Levy before 1979.
“You know, I was earning £895 a year,” remembers Levy today. “But I was so confident.”
The game was on. The story of mankind’s battle with machines over a millennia-old game is rich with competitive spirit and technological surprise. But who, human or computer chip, would win the bet in the end? How did computers change chess as a competitive sport? And, more intriguingly, is tomorrow’s unbeatable player a combination of human and machine, drawing on the strengths of both?
Levy made his wager at a time when the rate of improvement in computing was high. His confidence aside, there were plenty of people like McCarthy who thought a machine could beat him.
‘My long term goal was further ahead than it could see… the program would tie itself up in knots’ – David Levy
In 1978, Levy found himself in Toronto, playing a computer opponent in a match which stands out as a significant milestone in the history of chess programming. He played five games against his foe. The first was a draw, but he won the second and third. Then he lost the fourth. If the computer won the final game, the match would be tied – but Levy persevered, and he won.
And Levy kept winning. From the moment he agreed the original bet with McCarthy, he remained unbeaten by computers in exhibition matches for 21 years. It wasn’t until 1989, during a match hosted by the British Computer Society in London, that Levy was finally toppled by a program called Deep Thought. Less than 10 years later, in 1997, IBM’s supercomputer Deep Blue would become the first to beat a reigning world champion – Garry Kasparov – in a tournament-style match. The achievement was reported around the world.
But even by 1989, computers were succeeding against chess grandmasters and Levy knew his unbeaten streak couldn’t last forever. He had, during all his years of practice, developed a few “anti-computer tactics” that had helped get him to that point. They wouldn’t necessarily provide any advantage against another human but could give a player the edge over a computer opponent.
“I called it ‘do nothing but do it well’,” he recalls. “What I did was I sort of sat there making moves that improved my position in tiny ways. […] The computer wouldn’t be able to understand what my long-term plan was.”
‘Crash through and crush it’
In those days, computer chess programs could plan only a restricted number of moves ahead, perhaps five or six. Levy would bide his time and look for opportunities that might let him take the lead.
“My long-term goal was further ahead than it could see and what would inevitably happen is that the program would tie itself up in knots,” he says. “Then, at some point in the game my position would be so strong that I could crash through and crush it.”
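The horizon Levy exploited can be illustrated with a toy sketch – an abstract game tree, not any real engine’s code – of the fixed-depth minimax search that 1970s programs relied on. Each position carries a static guess at its value; anything deeper than the search depth is simply invisible:

```python
# Toy sketch (not any real engine's code) of depth-limited minimax.
# Each position is (static_eval, children); the search only refines the
# static guess down to max depth - everything deeper sits past the "horizon".

def minimax(node, depth, maximizing):
    score, children = node
    if depth == 0 or not children:
        # The horizon: fall back to the position's static evaluation.
        return score
    values = [minimax(child, depth - 1, not maximizing) for child in children]
    return max(values) if maximizing else min(values)

# A quiet line worth 4, versus a "trap" line that looks like it wins
# material (static eval 9) but is refuted two plies past a depth-2 horizon.
quiet = (0, [(4, [])])
trap = (0, [(9, [(9, [(-50, [])])])])
root = (0, [quiet, trap])

shallow = minimax(root, 2, True)  # sees only the bait: 9
deep = minimax(root, 4, True)     # sees the refutation: prefers the quiet 4
```

Searching two moves ahead, the program takes the bait; searching four, it finds the refutation. A human whose plan unfolded just beyond that horizon, as Levy’s did, was effectively invisible to the machine.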
But anti-computer tactics did not become irrelevant after 1989, or even 1997. Boris Alterman, an Israeli grandmaster who became recognised for a series of matches against computers in the 1990s and early 2000s, nicknamed his similar strategy the ‘Alterman Wall’. For this, he would play defensively behind a row of pawns, knowing that with more pieces on the board, the computer found it harder to calculate an advantage because of the large number of possible moves to consider.
There were still important areas of the game in which humans could excel against a machine
“That is one of the, I would say, best tactics which I used in my famous games against computers,” comments Alterman.
Shay Bushinsky, a chess programmer who developed the program Junior along with Amir Ban, is a long-time friend of Alterman. He recalls working with him on Junior in the early 2000s, when there were still important areas of the game in which humans could excel against a machine.
Take this example, in which Junior (playing as white) is beaten by a highly defensive strategy from grandmaster Jeroen Piket (playing as black). One commenter writes, “great anti-computer chess! just watch the junior's play after move 12 – he accomplishes nothing and just moves around his pieces while piket [sic] builds up a crushing attack!”
And there were people who realised that chess programs weren’t always accurately coded in the first place. Simon Waters, a British chess enthusiast, found some programming errors in the popular free program GNU Chess.
“There was an issue with time-keeping, so at some points it would allow itself almost no time to think even when it had time on the clock,” he writes in an email, giving one example.
The problem with tactics and bugs, though, is that once chess programmers know about them, they can be fixed or programmed against. “Later on [anti-computer approaches] inspired us to actually develop a new model where we better understood this phenomenon. We actually did that, helping us win two consecutive world championships later,” says Bushinsky.
There are so many billions of moves that can be made during a match that even computers have to analyse what’s happening on the fly
Chess programmers have been picking up on the insights of people like David Levy and Boris Alterman ever since that 1968 bet. In fact, chess programs today are so strong that even the best human player in the world (currently the 24-year-old Norwegian Magnus Carlsen) wouldn’t have a hope of winning a tournament match.
“In the last 10 years there have been massive improvements in terms of both the search and the evaluation,” says Mark Lefler, a programmer who works on the powerful Komodo chess program. “The search is highly selective now and they chop off parts of the tree [of possible moves] or greatly reduce them if a program thinks a move isn’t particularly good,” he says.
“The critical lines can be searched much, much more deeply,” he adds.
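The textbook way to “chop off parts of the tree” is alpha-beta pruning – Komodo’s selectivity goes far beyond this, but a minimal sketch on a made-up tree shows the principle of skipping lines an opponent would never allow:

```python
# Minimal sketch of alpha-beta pruning on a toy tree (leaves are scores,
# inner nodes are lists). Real engines like Komodo layer far more
# aggressive heuristic reductions on top of this basic idea.

def alphabeta(node, alpha, beta, maximizing, stats):
    if not isinstance(node, list):  # a leaf position: count it as visited
        stats["visited"] += 1
        return node
    best = float("-inf") if maximizing else float("inf")
    for child in node:
        val = alphabeta(child, alpha, beta, not maximizing, stats)
        if maximizing:
            best = max(best, val)
            alpha = max(alpha, val)
        else:
            best = min(best, val)
            beta = min(beta, val)
        if beta <= alpha:
            # This line is already worse than an alternative the other
            # side has - the remaining siblings need never be searched.
            stats["cutoffs"] += 1
            break
    return best

tree = [[3, 5, 6], [2, 1, 9], [0, 7, 4]]  # max to move; opponent replies
stats = {"visited": 0, "cutoffs": 0}
best = alphabeta(tree, float("-inf"), float("inf"), True, stats)
# best is 3, found after visiting only 5 of the 9 leaves
```

The deeper the tree, the more dramatic the saving – which is what lets “the critical lines” be searched so much further than the rest.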
Chess is a game that still hasn’t been ‘solved’. There are so many billions of moves that can be made during a match that even computers have to analyse what’s happening on the fly. To get better at this they have to gradually “learn” – or build up databases – about the potential outcomes of any move in any situation.
“I have a living room full of computers constantly playing games trying to prove that an idea is better than what we already have,” says Lefler.
Overnight, Lefler’s six computers play through over 14,000 games each during an eight-hour period. “Six machines times 14,000 games is a lot of games,” he says. And with every game played, the database gets deeper and richer.
There is even sporting interest in watching computers play against one another
The result of Lefler’s busily whirring machines is the ever-increasing prowess of Komodo. There is a rating system for chess ability that applies to both humans and computers called the “Elo rating system”, after Arpad Elo, a physics professor who invented it.
Magnus Carlsen’s Elo is currently 2,850 while Komodo’s sits comfortably above that – at 3,350. These ratings change over time based on games won, lost and drawn against opponents, taking the opponent’s own Elo into account. For example, a player with an Elo of 1,400 would get a bigger boost from drawing against Carlsen than would someone – or a computer – with an Elo comparable to the world champion’s who achieved the same result.
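The arithmetic behind that asymmetry is simple. A sketch of the standard Elo update – the exact K-factor varies by federation and rating level, so K=10 here is just an illustrative choice:

```python
# Sketch of the standard Elo update. The K-factor (here 10) is an
# illustrative choice; federations use different values by rating level.

def expected_score(rating, opponent):
    # Expected score (between 0 and 1) against the opponent.
    return 1 / (1 + 10 ** ((opponent - rating) / 400))

def updated_rating(rating, opponent, score, k=10):
    # score: 1 for a win, 0.5 for a draw, 0 for a loss.
    return rating + k * (score - expected_score(rating, opponent))

# A 1,400-rated club player drawing Carlsen (2,850) gains almost the
# full K points, because a draw was a near-miracle for them:
club_gain = updated_rating(1400, 2850, 0.5) - 1400
# An equally-rated 2,850 opponent drawing him gains nothing at all:
peer_gain = updated_rating(2850, 2850, 0.5) - 2850
```

The bigger the rating gap, the closer the underdog’s gain gets to the full K, while the favourite risks nearly as much for anything short of a win.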
Alterman has long ceased trying to beat computers. “It just becomes like in athletics to run against a machine – it is almost impossible for humans to play,” he says. But has competitive play against computers died as a result? Far from it. David Levy points out that today, so-called “odds matches”, in which the computer starts off with, for example, one pawn less than the human player, are currently quite popular. Komodo has played a series of grandmasters under such conditions.
And, it’s worth noting that there is even sporting interest in watching computers play against one another. This has been the case for some time and it’s now common for chess computer fans to keep track of and discuss machine-vs-machine bouts in online forums.
But instead of the computer always being the opponent, what if it were designed to assist a human player so that the advantages of both kinds of brain were used at once? Such a system – a kind of chess cyborg – could elevate even novice players to greatness.
And that’s exactly what happened to two amateurs in 2005. Steven Cramton and Zackary Stephen were chess buddies who met at a local club in New Hampshire in the US. They had spent a few years honing their skills at the game and Stephen, in particular, was keen on chess programming.
They entered a “freestyle” tournament that year which attracted several teams of grandmasters aided by computers. The tournament was played remotely, online, via the servers of Playchess.com.
Both Cramton and Stephen were amateurs: they had day jobs and were effectively unknown in the world of competitive chess. But they had some clever tricks up their sleeve. For one, they had developed a database of personal strategies that showed which of the two players typically had greater success when faced with similar situations.
‘We had really good methodology for when to use the computer and when to use our human judgement’ – Zackary Stephen
“We did have a really extensive database that I worked on for four or five years before that,” remembers Stephen. “Steve had contributed to it.”
They also had three PCs chugging through the numbers and these had been specially prepared by Stephen. But crucially, they knew how to actually play a cyborg game.
“We had really good methodology for when to use the computer and when to use our human judgement, that elevated our advantage,” Stephen says.
And in the end, it all paid off – they won the tournament, leaving grandmasters and some well-known programs in their wake. It was quite a shock but it proved the theory worked: certain human skills were still unmatched by machines when it came to chess and using those skills cleverly and co-operatively could make a team unbeatable. Humans playing alongside machines are thought of as the strongest chess-playing entities possible.
It’s a fact which saves mankind, and rightly so, from the ignominy of simply being beaten by computers at a game we have been practising for millennia. But it’s also fair to note that computers have probably irreparably changed the way that chess is played forever. In recent years there has been a worrying rise in cheating at tournaments, many such incidents involving the use of computers. In April this year, one grandmaster was caught apparently cheating after he kept leaving a match to go to the bathroom. Officials say they found an iPod Touch with a chess app they believe was being consulted during toilet breaks. And in September, an Italian was discovered to be using an elaborate system involving a camera, Morse code and an accomplice armed with a chess program to cheat during a tournament.
Chess has always been and will always be fiercely competitive. The computers may indeed be too strong to beat. But Bushinsky makes an interesting comment. Because young players today don’t even think of challenging them, the era of machines improving out of a direct response to plucky human rivalry may have come to an end. The fact we’ve become too scared to play them may be a brilliant move in itself…