by Ian Rogers
Demis Hassabis was a promising English junior in the early 1990s but retired from the game at 27, when his rating was lower than it had been at half that age. By then he had already made a small fortune as a video game developer, his first success coming as a teenager with the game Theme Park.
Hassabis moved from video games into the field of artificial intelligence and in 2011 founded a company called DeepMind Technologies.
In 2014 Google acquired DeepMind for close to $A1 billion, but Hassabis stayed on to manage its AI projects.
This week Hassabis' work gave chess, the game that lost some of its magic in 1997 when Garry Kasparov was beaten by the computer Deep Blue, an ego boost: his team unveiled a computer that beat an elite go player.
Go, a board game dominated by Japanese masters for centuries until the 1970s, could boast, correctly, that it was more complicated than chess and nearly impossible to program a computer to play well.
The world’s top go player, Lee Sedol, claimed “Go is incomparably more subtle and intellectual [than chess]”, and computer programmers had to agree. (Go, played by placing stones on a 19×19 board, offers no piece tallies or king attacks to guide a computer’s evaluations; positions are judged by space and territory.)
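To see why the programmers agreed, compare the crudest possible evaluations of the two games. The sketch below is purely illustrative and is in no way AlphaGo's or DeepMind's method: a chess program can lean on a one-line material count, while even the most naive go program has to estimate territory, counted here as empty regions bordered by stones of only one colour.

```python
# Illustrative only: the crudest evaluation a chess program and a go
# program might use. None of this is AlphaGo's or DeepMind's code.

# Chess: tally material values; positive favours White.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def chess_material(white_pieces, black_pieces):
    """Material balance in pawn units, e.g. chess_material("QRRP", "QRNP") = 2."""
    return (sum(PIECE_VALUES[p] for p in white_pieces)
            - sum(PIECE_VALUES[p] for p in black_pieces))

# Go: no piece values exist, so estimate territory instead - here, empty
# regions of the 19x19 board that touch stones of only one colour.
def naive_territory(board):
    """board: 19x19 grid of 'B', 'W' or '.'; returns (black_points, white_points)."""
    size = len(board)
    seen = set()
    score = {"B": 0, "W": 0}
    for r in range(size):
        for c in range(size):
            if board[r][c] != "." or (r, c) in seen:
                continue
            region, borders, stack = [], set(), [(r, c)]
            seen.add((r, c))
            while stack:                       # flood-fill one empty region
                y, x = stack.pop()
                region.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < size and 0 <= nx < size:
                        if board[ny][nx] == ".":
                            if (ny, nx) not in seen:
                                seen.add((ny, nx))
                                stack.append((ny, nx))
                        else:
                            borders.add(board[ny][nx])
            if len(borders) == 1:              # region surrounded by one colour only
                score[borders.pop()] += len(region)
    return score["B"], score["W"]
```

The chess heuristic is one arithmetic line; the go heuristic is already a flood fill and still says nothing about life and death, ko or influence, which is roughly why the game stayed out of computers' reach for so long.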
In recent years computers have mastered not only chess but also other games once thought to be computer-proof, such as backgammon, poker and Scrabble. However, go remained the final frontier.
The world’s best go computer, Crazy Stone, had defeated some top Japanese players, but only after being given a handicap of at least four stones.
Yet this week Hassabis' baby, AlphaGo, son of DeepMind, took on and beat top European go player Fan Hui 5-0, with all games starting on even terms.
AlphaGo’s feat is far less dramatic than Deep Blue’s in 1997: Fan Hui, though one of the very best in Europe, could not match the best players from Korea, China and Japan. Yet the 5-0 winning margin is impressive indeed.
It has been suggested that AlphaGo would struggle at the slower time limits in Japan, where a game can take more than a day and human mistakes can be minimised.
Google’s planned next victim is the great Sedol, with a match tentatively scheduled for March (probably at Korea’s fast time limits).
However, just as Deep Thought (later Deep Blue) mastered lightning chess before eventually conquering classical time limits, go players can expect only a modest respite before AlphaGo clears go's final hurdles.
Whether that respite is months or years is in some respects irrelevant, as my veteran History and Philosophy of Science lecturer at Melbourne University tried to explain to me in 1978 while failing me for missing an examination that clashed with the Buenos Aires Olympiad. “When a computer sees 64 moves ahead, why will anyone bother playing chess?” he asked me. My answer, which dwelt on how unlikely his computer of the future was and threw in the obligatory comparison of the number of atoms in the universe with the number of possible chess games, did not disprove his essential point: a superior calculating machine would eventually overwhelm humans.

Indeed, when the greatest chess computer of all time, Hydra, overwhelmed Michael Adams 5.5-0.5 in a match a quarter of a century later, it needed to calculate fewer than 10 moves ahead for each side (with only the most basic chess-playing software) to turn the match into a non-contest.
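The arithmetic behind his point is easy to sketch. Assuming an average of roughly 35 legal moves per chess position and about 250 per go position (commonly quoted rough figures, not anything measured by DeepMind), even a search of ten moves ahead per side is astronomically large, and go's tree is wider still by many orders of magnitude.

```python
# Back-of-envelope only: how fast full-width game trees grow with depth,
# assuming average branching factors of about 35 (chess) and 250 (go).
CHESS_BRANCHING = 35
GO_BRANCHING = 250

def tree_size(branching, plies):
    """Rough count of move sequences in a full-width search to the given depth."""
    return branching ** plies

# "Fewer than 10 moves ahead for each side" is about 20 plies (half-moves).
for name, branching in (("chess", CHESS_BRANCHING), ("go", GO_BRANCHING)):
    print(f"{name:5s} at 20 plies: about {tree_size(branching, 20):.1e} sequences")
```

No machine searches anything like either number in full, of course; the comparison only shows why depth plus a basic evaluation was enough in chess, and why go needed something cleverer than brute force.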
For Hassabis, DeepMind’s success is merely a stepping stone towards AI applications such as medical diagnostics, just as the all-conquering chess computer Hydra went on to run a public transport system.