For the past few days, I and many others have been riveted by the match between Google DeepMind’s AlphaGo and Lee Sedol – one of the top five Go players in the world (he is variously listed as world champion and as the current world no. 2, but the Go hierarchy is not as easy to pin down as that of chess; there are Elo-style ratings for Go, on which Lee is ranked no. 4, but these do not seem to be official).

Go is by any measure the most difficult board game designed by man. On a 19 x 19 board, it has an estimated game-tree complexity of 10^360 (whereas the game-tree complexity of chess is estimated at around 10^120, and the number of atoms in the observable universe is estimated to be 4×10^80), which remains beyond the brute-force calculating power of today’s computers. It is highly dependent on intuition – the SGH Diagnostic Bacteriology blogger, who is an amateur player but follows developments in the game closely, remarked to me how Go commentary by the top players is generally vague and hesitant, devoid of the long move-by-move calculations of chess commentary. This intuitive, horizon-scanning way of playing the game well had made it virtually impossible for computers to master, until now. In chess, where concrete calculation is probably more important than long-term strategy, a dedicated chess machine, Deep Blue, defeated World Champion Garry Kasparov as early as 1997, and now even a copy of the chess programme Stockfish (available free) on a handphone can defeat all but the strongest of grandmasters.
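
To make those exponents concrete: game-tree complexity is roughly b^d, where b is the average number of legal moves per position (the branching factor) and d is the typical game length. The few lines of Python below use the commonly quoted ballpark figures (b ≈ 250 and d ≈ 150 for Go; b ≈ 35 and d ≈ 80 for chess) – rough values, not exact measurements – to show where the numbers come from.

```python
import math

# Rough game-tree complexity: (branching factor) ** (game length).
# The b and d values are commonly quoted ballpark figures.
for game, b, d in [("chess", 35, 80), ("Go", 250, 150)]:
    exponent = d * math.log10(b)  # log10(b ** d) = d * log10(b)
    print(f"{game}: roughly 10^{exponent:.0f} possible games")

# chess: roughly 10^124 possible games
# Go: roughly 10^360 possible games
```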

AlphaGo was developed with a unique approach to mastering Go. Besides the usual Monte Carlo tree search algorithms, it combined deep policy and value networks with reinforcement learning, such that it could in principle keep getting better and better by playing against itself. This is described accessibly in a blog post by Google DeepMind’s founders here. When AlphaGo defeated European Go champion Fan Hui in October 2015, many were impressed but still thought that humans had the edge. Fan Hui was, after all, only a second-dan professional (professional ranks go up to ninth dan) in 2015, and the gap between his strength and that of the top Go players is somewhat like the gap between a strong international master and Magnus Carlsen. Many, like myself, thought it would take a couple more years before the machines could comprehensively defeat humans at this game. Lee Sedol himself, interviewed just prior to the match, felt that he could defeat the machine based on what he had seen of its games.
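
The actual architecture is spelled out in DeepMind’s paper; the sketch below is only a toy illustration of the general idea, in Python. A policy network proposes promising moves, a value network scores a position without playing it out to the end, and a tree search stitches the two together. Every name here (policy_net, value_net, search and so on) is a made-up placeholder, and the “networks” are random stand-ins rather than trained models – nothing in it is DeepMind’s code.

```python
import math
import random

MOVES = list(range(5))  # pretend every position has 5 legal moves

def policy_net(state):
    """Stand-in policy network: a prior probability for each move.
    In AlphaGo this is a deep network trained on human games and
    then improved through self-play."""
    priors = [random.random() for _ in MOVES]
    total = sum(priors)
    return {m: p / total for m, p in zip(MOVES, priors)}

def value_net(state):
    """Stand-in value network: estimated winning chance of a position,
    replacing a full random rollout to the end of the game."""
    return random.random()

class Node:
    def __init__(self, prior):
        self.prior = prior      # P(move | parent) from the policy net
        self.visits = 0
        self.value_sum = 0.0
        self.children = {}      # move -> Node

    def mean_value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def search(root_state, n_simulations=200, c_puct=1.0):
    root = Node(prior=1.0)
    for _ in range(n_simulations):
        node, state, path = root, root_state, []
        # Selection: descend the tree, trading off the value estimate
        # (exploitation) against the policy prior scaled down by visit
        # counts (exploration).
        while node.children:
            total = sum(c.visits for c in node.children.values())
            move, node = max(
                node.children.items(),
                key=lambda mc: mc[1].mean_value()
                + c_puct * mc[1].prior
                * math.sqrt(total + 1) / (1 + mc[1].visits),
            )
            state = state + (move,)  # toy state: the tuple of moves so far
            path.append(node)
        # Expansion: the policy net supplies priors for the new leaf.
        for move, prior in policy_net(state).items():
            node.children[move] = Node(prior)
        # Evaluation: the value net scores the leaf directly.
        leaf_value = value_net(state)
        # Backup: credit the evaluation to every node on the path.
        # (A real two-player implementation flips the value at
        # alternating depths; omitted here for brevity.)
        for visited in [root] + path:
            visited.visits += 1
            visited.value_sum += leaf_value
    # Play the most-visited root move.
    return max(root.children.items(), key=lambda mc: mc[1].visits)[0]

print("chosen move:", search(root_state=()))
```

The reinforcement learning part is what makes continued improvement possible in principle: the games the program plays against itself become fresh training data for the very same networks.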

But we had all underestimated the self-learning power of AlphaGo. The machine currently playing Lee Sedol is a completely different beast from the one that played Fan Hui in October 2015. It has already clinched the five-game match by winning the first three games. Everyone understood that it had certain advantages – it would never tire, and it would always be emotionless and objective, factors that matter significantly in games between humans – but by the third game, well described in this Gogameguru blog, most of the top Go players understood that AlphaGo is fundamentally stronger than any known human player. This article on Wired.com is also worth reading, and its title – “The Sadness and Beauty of Watching Google’s AI Play Go” – says it all.

What will happen next? If the history of chess is any indication, strong Go programmes that can run on any platform (including tablets and phones) will arrive over the next several years. More importantly, the way Go is played will change, much as strong chess players have adapted to the incursion of strong chess-playing programmes. We chess players have improved because the computer programmes have taught us different ways of assessing and playing the game, and chess has not become a less worthwhile game just because computers now play it better than humans. Human Go players will likewise improve and advance as a consequence of machine learning.

More excitingly, such artificial intelligence (AI) techniques can then be applied in other areas. In my own field, areas such as infectious disease outbreak forecasting, or modelling the course of outbreaks under different interventions, are in their infancy and terribly imprecise. Adopting machine learning seems to be the way to go in the future. Heretical as it sounds, many parts of a doctor’s job can be done better (and more safely) by such programmes. IBM’s Watson is probably already a better oncologist – as far as choosing treatment algorithms goes (not the hand-holding and listening part of patient care) – than most human cancer specialists. There is no reason to think that trained AI programmes cannot become better diagnosticians and antimicrobial prescribers than most, if not all, infectious diseases specialists. Just as chess and Go players did not lose all credibility or their jobs when computers learned to play these games better than them, doctors and epidemiologists should be able to adapt and improve, rather than be sacked, if and when specific AI programmes become available.
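
To give a flavour of what this could look like at its very simplest, here is a sketch: fit a model to past weekly case counts and forecast the next week. The counts below are invented for illustration, and the model (a log-linear autoregression) is deliberately crude; real outbreak forecasting would involve actual surveillance data, interventions, and far richer models.

```python
import numpy as np

# Invented weekly case counts for a hypothetical outbreak.
cases = np.array([3, 4, 6, 9, 14, 20, 31, 45, 68, 99, 150, 221], dtype=float)

LAGS = 3  # predict this week's count from the previous three weeks

# Build a training set: each row is LAGS past weeks, the target is the next.
X = np.array([cases[i : i + LAGS] for i in range(len(cases) - LAGS)])
y = cases[LAGS:]

# Least squares on log counts, since epidemic growth is multiplicative.
coef, *_ = np.linalg.lstsq(np.log(X), np.log(y), rcond=None)

forecast = np.exp(np.log(cases[-LAGS:]) @ coef)
print(f"forecast for next week: about {forecast:.0f} cases")
```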

We should be able to gain considerable mileage from AI development and application before something like Skynet arises.
