The development of computer systems that could hold their own against chess grandmasters was a major milestone for artificial intelligence. Another was reached when Google’s AlphaGo program beat grandmaster Lee Se-Dol at Go, the far more complex Chinese strategy game. AlphaGo’s designers professed themselves astounded by the victory, which arrived a decade earlier than they had anticipated.
No one quoted in the article on AlphaGo wants to use the E-word, but the casual observer might say this program is evolving on its own.
“Part of the reason for AlphaGo’s success is that it is partly self taught — having played millions of games against itself after initial programming to figure out the game and hone its tactics through trial and error,” AFP explains.
After playing with itself (if you’ll pardon the expression) a few million times, AlphaGo developed a level of skill at Go that exceeded the parameters set by its designers. It warmed up with a victory over European champion Fan Hui before tackling Lee Se-Dol. (That earlier match wasn’t a narrow victory, either – AlphaGo crushed Fan Hui, winning all five games.)
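The real AlphaGo couples deep neural networks with Monte Carlo tree search, but the trial-and-error loop AFP describes can be illustrated with a far humbler self-play learner. The sketch below is my own toy example, not DeepMind’s code: a value table plays single-pile Nim against itself, and wins and losses from each game are propagated back through the positions visited, so the program gradually discovers which positions are good without ever being told.

```python
import random

def train_selfplay(n_stones=7, max_take=3, episodes=20000, eps=0.1, seed=0):
    """Tabular self-play learner for single-pile Nim (take 1..max_take
    stones; whoever takes the last stone wins).

    Both sides share one value table and improve by playing each other,
    a miniature version of the trial-and-error self-play described in
    the article. (Illustration only; AlphaGo itself is vastly more
    sophisticated.)
    """
    rng = random.Random(seed)
    # value[s] = estimated chance that the player *to move* at s wins
    value = {s: 0.5 for s in range(n_stones + 1)}
    value[0] = 0.0  # no stones left: the player to move has already lost
    alpha = 0.1     # learning rate for the value updates
    for _ in range(episodes):
        s = n_stones
        path = []
        while s > 0:
            moves = list(range(1, min(max_take, s) + 1))
            if rng.random() < eps:
                take = rng.choice(moves)          # explore a random move
            else:
                # exploit: leave the opponent in the worst position
                take = min(moves, key=lambda t: value[s - t])
            path.append(s)
            s -= take
        # the player who just moved took the last stone and won;
        # outcomes alternate as we back up through earlier positions
        outcome = 1.0
        for st in reversed(path):
            value[st] += alpha * (outcome - value[st])
            outcome = 1.0 - outcome
    return value

value = train_selfplay()
```

In this variant, piles that are multiples of four are lost for the player to move, and after training the table reflects that: `value[4]` sinks well below the winning positions, knowledge the program acquired purely from playing itself.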
The Verge reports that after AlphaGo clinched the five-game match with three straight victories, Lee came back to win the fourth game on Sunday, in part because the program made mistakes that it promptly learned from. According to DeepMind founder Demis Hassabis, whose unit created AlphaGo, the program made a mistake on move 79 of the game and realized its error by move 87. It won’t make that mistake again.
Lee, who has been declared world champion of Go eighteen times, entered the match predicting he would win all five games… and ended up hoping he could beat the odds to win a single game.
The Go Game Guru website has an overview of the strategy Lee developed to win, which will interest aficionados of the game. In short, he won by turning AlphaGo’s sublime ability to calculate the odds of victory against it, then pulled off a pivotal move so unexpected that the expert commentators covering the game didn’t see it coming. Evidently one of the few remaining weaknesses in the computer’s play style is that it has difficulty setting up, or anticipating, moves that grant only marginal benefit, as opposed to moves that project more clearly as “good” or “bad” in future turns of the game.
Lee would have taken home a million dollars had he prevailed, but since AlphaGo already won the three games needed to triumph in the contest, Google will donate the million dollars to charity.
Maybe they should use that million bucks to fund therapy for people who fear the artificial intelligence revolution is nearly upon us. The significance of winning the highly intuitive game of Go, rather than the more coldly logical game of chess, cannot be overstated.
A.I. enthusiasts are delighted by the self-learning AlphaGo’s ability to radically exceed its creators’ expectations. “I don’t see why we would speak about fears. On the contrary, this raises hopes in many domains such as health and space exploration,” said A.I. specialist Jean-Gabriel Ganascia of the Pierre and Marie Curie University in Paris, as quoted by AFP.
“It is not the beginning of the end of humanity. At least if we decide we want to aim for safe and beneficial AI, rather than just highly capable AI,” said Oxford University futurist Anders Sandberg.
That might not be the most soothing way to put it, since the alarming thing about self-learning machines is the way our expectations and intentions may not matter.
A computer program that performs fully ten years ahead of schedule is amazing, given how much development occurs in a single year. For many futurists, self-learning beyond all anticipation by programmers is one of the harbingers of true artificial intelligence. Suppose our assumptions about A.I. rest on projections that we won’t develop the ability to create a living machine mind for another fifty years. What happens if we create a lesser program that can generate that living code for us, decades ahead of schedule, the way AlphaGo just won games it shouldn’t have been able to win yet?
What if a machine makes the machine we don’t know how to make yet, by learning from the successes and failures of its designers… and what if it does that while humans aren’t expecting it?