Google’s Open-Source AGI is Disruptive

Lee Se-dol, one of the greatest modern players of the ancient board game Go, makes a move during the third game of the Google DeepMind Challenge Match against the DeepMind-developed computer program AlphaGo.
Newport Beach, CA

The DeepMind Challenge victory demonstrates that Google’s open-source “artificial general intelligence” has just launched a new era of disruptive technology.

Eighteen-time world Go champion Lee Se-dol made the man-versus-machine Google DeepMind Challenge interesting by defeating the AlphaGo computer on March 13, after three straight losses. A hard-fought fifth game closed out the series, with AlphaGo eking out the final "W" to give the Google-owned British company DeepMind a 4-1 victory and plenty of proof-of-concept bragging rights.

Google is seeking to be the leader in computers that can learn without pre-programming, referred to as "artificial general intelligence" (AGI). What makes DeepMind's triumph so compelling is that its AlphaGo computer didn't win through brute force: the game of Go, with an average of 150 moves per game, allows about 10^170 board combinations. Because that number of combinations far exceeds the number of atoms in the observable Universe, traditional computer algorithms that search every possible move cannot compete on the time scale of a human game.
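The scale of that figure can be checked with a few lines of arithmetic. Each of Go's 19 × 19 = 361 board points can be empty, black, or white, which gives an upper bound of 3^361 raw placements; only a small fraction of those (roughly one percent) are legal positions, which is where the ~10^170 estimate comes from. A quick sketch:

```python
# Upper bound on Go board configurations: each of the 361 points is
# empty, black, or white.  Only ~1.2% of these raw placements are
# legal positions, which yields the widely cited ~10^170 figure.
raw_placements = 3 ** 361
order_of_magnitude = len(str(raw_placements)) - 1
print(order_of_magnitude)  # 172
```

For comparison, the number of atoms in the observable Universe is usually estimated at around 10^80, so even the legal-position count dwarfs it by some 90 orders of magnitude.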

IBM’s Deep Blue computer beat world chess champion Garry Kasparov in May 1997 with preprogrammed moves that the Grand Master thought were too sophisticated for a computer. But according to the globalguerrillas blog, AlphaGo won the most recent match by learning how to play the game from scratch, employing three disruptive strategies:

  • AlphaGo used a “model-free” approach that allowed the AGI program to build complex models of the game from scratch, rather than relying on algorithms built from human programmers’ assumptions;
  • AlphaGo learned the game by interacting with “big data” from 30 million games played by humans, rather than following programmers’ inputs; and
  • AlphaGo engaged in “big sim” by playing itself on 50 computers night and day until it had “learned” enough to beat a human grandmaster.
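The “big sim” idea in the last bullet can be illustrated with a toy example. The sketch below is hypothetical and has nothing to do with DeepMind’s actual system: it is a minimal self-play learner for the simple stick game Nim (21 sticks, take 1–3 per turn, taking the last stick wins). The program starts with no strategy at all and improves purely by playing games against itself and backing up the win/loss result.

```python
import random

N = 21  # starting sticks; players alternate taking 1, 2, or 3

# Q[(sticks, take)] estimates how good each move is for the player
# making it.  All values start at zero: no built-in strategy.
Q = {(s, a): 0.0 for s in range(1, N + 1) for a in (1, 2, 3) if a <= s}

def choose(s, eps):
    """Pick a move at state s; explore randomly with probability eps."""
    acts = [a for a in (1, 2, 3) if a <= s]
    if random.random() < eps:
        return random.choice(acts)
    return max(acts, key=lambda a: Q[(s, a)])

def selfplay_episode(eps=0.1, alpha=0.5):
    """Play one game against itself, then back up the result."""
    s, history = N, []
    while s > 0:
        a = choose(s, eps)
        history.append((s, a))
        s -= a
    # The player who took the last stick wins: reward +1 for that
    # move, -1 for the opponent's previous move, and so on backward.
    reward = 1.0
    for (st, ac) in reversed(history):
        Q[(st, ac)] += alpha * (reward - Q[(st, ac)])
        reward = -reward

random.seed(0)
for _ in range(20000):
    selfplay_episode()

# With enough self-play, greedy play near the end of the game becomes
# perfect: with 3 sticks left, take all 3 and win.
print(choose(3, eps=0.0))
```

AlphaGo’s actual self-play operated on deep neural networks rather than a small lookup table, but the loop is the same shape: play yourself, score the outcome, and adjust toward moves that led to wins.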

Because AGI computers do not just replicate and master what they have been fed as input, these learning machines have the capability to innovate independently. The AGI techniques DeepMind applied to the game of Go are generic, and could be applied to machines “learning” millions of human activities that are currently referred to as “jobs.”

Just as Google gave away its Android operating system in order to capture 80 percent of the smartphone market, Google announced in November 2015 that it is offering open-source access to its platform for machine learning, called “TensorFlow.”

Google acknowledges that it is already running some elements of TensorFlow in over 50 of its existing products as a tool to harness the disruptive power of deep neural network learning machines.
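TensorFlow’s core idea is to describe a numerical computation as a graph of operations and then execute that graph. The sketch below is a hypothetical toy illustration of that declare-then-run pattern in plain Python; it is not the TensorFlow API, and the names (`Node`, `const`, `run`) are invented for this example.

```python
# Toy dataflow graph: operations are declared first as nodes, and the
# whole graph is evaluated afterward by a single run() call.

class Node:
    def __init__(self, op, inputs=()):
        self.op = op          # function to apply
        self.inputs = inputs  # upstream nodes feeding this one

def const(v):
    return Node(lambda: v)

def add(a, b):
    return Node(lambda x, y: x + y, (a, b))

def mul(a, b):
    return Node(lambda x, y: x * y, (a, b))

def run(node):
    # Recursively evaluate upstream nodes, then apply this node's op.
    return node.op(*(run(i) for i in node.inputs))

# Declare the graph for y = (2 * 3) + 4, then execute it.
y = add(mul(const(2.0), const(3.0)), const(4.0))
print(run(y))  # 10.0
```

Separating the description of the computation from its execution is what lets a framework like TensorFlow run the same graph on laptops, server farms, or specialized hardware, which is part of why open-sourcing the platform matters.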