The ancient Chinese game of Go is one of the simplest, yet most complex, games in the world. The objective is to surround empty territory or your opponent's pieces (called stones) with your own, removing surrounded stones from the board. The complexity arises from the 19-by-19 grid, which offers 361 intersections on which stones can be placed. The number of possible board positions exceeds the number of atoms in the observable universe. 

Google's AlphaGo clandestinely defeated some of the top Go players in the world. Image source: Pixabay.

In an online community known for attracting world-class Go players, a previously unknown and enigmatic opponent going by the moniker "Master" emerged. Master played and won more than 50 consecutive games against some of the top players in the world. Among those players was Ke Jie, a 19-year-old prodigy and currently the top-ranked Go player in the world. The unusual style of play led some to speculate that Master might not be human. Those suspicions were confirmed in a tweet by Demis Hassabis, the CEO of DeepMind, a division of Alphabet's (GOOG -1.10%) (GOOGL -1.23%) Google. The mysterious Master was AlphaGo, DeepMind's artificial intelligence (AI) system.

Would you like to play a game?

Deep Blue -- the chess-playing computer. Image source: Flickr user Gabriel Saldana.

Computers have been defeating their human counterparts for some time. In 1997, IBM's supercomputer Deep Blue defeated Garry Kasparov in a chess match, the first time artificial intelligence had beaten a reigning world chess champion. Likewise, in 2007, scientists proved that a computer program called Chinook cannot be beaten at checkers: it evaluates every possible move, and even if its human opponent plays flawlessly, the best they can hope to achieve is a draw. So how is this different? In those earlier victories, the computer program "memorized" every potential move and mathematically calculated the odds of success for each. Using this "brute force" technique, the computer quickly determined the outcome of every potential move and chose the one that led to success; Deep Blue processed 200 million board positions per second in determining its next move. AlphaGo, on the other hand, contains a neural network that mimics the human brain's capacity to learn. These software models and complex algorithms allow the system to "learn" from its past mistakes and apply what it has learned going forward. AlphaGo made headlines last year by beating the second-ranked Go player in the world, a feat many thought impossible. 
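To make the "brute force" idea concrete, here is a minimal sketch of exhaustive game-tree search (minimax) applied to tic-tac-toe, a game small enough to search completely. This is purely illustrative, not the actual code behind Deep Blue or Chinook, but it shows the same principle: examine every possible continuation and pick the move that guarantees the best outcome.

```python
# Exhaustive "brute force" search on tic-tac-toe (illustrative sketch only).
# As with checkers, perfect play by both sides ends in a draw.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Game value for X under perfect play: +1 X wins, 0 draw, -1 O wins."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0  # board full, no winner: draw
    values = []
    for i in moves:
        board[i] = player                         # try the move...
        values.append(minimax(board, 'O' if player == 'X' else 'X'))
        board[i] = None                           # ...then undo it
    # X picks the move maximizing the value; O picks the one minimizing it
    return max(values) if player == 'X' else min(values)

print(minimax([None] * 9, 'X'))  # perfect play from an empty board: 0 (draw)
```

The same exhaustive approach is what breaks down for Go: the 19-by-19 board has vastly too many continuations to enumerate, which is why AlphaGo needed a different, learning-based strategy.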

Life is short, play more

Games are among the most effective ways for scientists to develop and test their systems. By gauging a system's ability to conquer game tasks, researchers can more accurately assess how successful it will be at providing solutions for real-world challenges. Artificial intelligence systems have been playing and winning increasingly complex games, and those wins enable real-world applications. Learning to play the game Civilization, for example, helped a computer better understand language: the system digested the manual and applied the rules to playing the game, leading to advances in natural language processing.

DeepMind's AI-based Deep Q-Network (DQN) was loaded with 49 Atari games and given no instructions on how to play.

Deep Q-Network learned to play Atari games. Image source: Pixabay.

Given only the game's video pixels (including the score) as input and allowed to experiment with random keystrokes, the system learned to play and win 29 of the 49 games without ever being told the rules or the objective. DeepMind believes this will eventually lead to general learning systems, machines that can think -- the holy grail of AI. 
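The trial-and-error process described above is reinforcement learning. A minimal sketch of the idea, using a tabular Q-learning update on a tiny made-up corridor world (DQN replaces this lookup table with a deep neural network reading raw pixels; this toy example and its parameters are assumptions for illustration, not DeepMind's system):

```python
import random

# Toy reinforcement learning: a 5-state corridor where reaching the
# rightmost state earns a reward of 1. The agent starts knowing nothing
# and learns, by random experimentation, which action each state favors.
N = 5                              # states 0..4; state 4 is the goal
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2  # learning rate, discount, exploration rate
Q = {(s, a): 0.0 for s in range(N) for a in (-1, +1)}  # value estimates

random.seed(0)
for episode in range(500):
    s = 0
    while s != N - 1:
        # epsilon-greedy: mostly exploit current knowledge, sometimes explore
        if random.random() < EPS:
            a = random.choice((-1, +1))
        else:
            a = max((-1, +1), key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)          # take the step
        r = 1.0 if s2 == N - 1 else 0.0         # reward only at the goal
        best_next = 0.0 if s2 == N - 1 else max(Q[(s2, b)] for b in (-1, +1))
        # Q-learning update: nudge the estimate toward reward + discounted future
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# After training, the learned policy moves right (+1) in every state.
print([max((-1, +1), key=lambda act: Q[(s, act)]) for s in range(N - 1)])
```

The score in an Atari game plays the role of the reward here: no rules are ever supplied, yet repeated play plus this update rule steers the system toward actions that score points.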

A team from Intel (INTC -2.40%) and TU Darmstadt used Grand Theft Auto to improve their systems' visual learning algorithms by automating the process of identifying images used to train the system. The results were astounding as the automation was 1,000 times faster. The team believes that its success will lead to additional breakthroughs in computer vision tasks. 

A team at Microsoft (MSFT -1.27%) Research created an AI training platform on the game Minecraft that allows systems to learn from and collaborate with human players, perform a range of simple to complex tasks, and understand complicated environments. The researchers anticipate that this will accelerate deep learning by teaching systems to understand and navigate three-dimensional spaces. 

The Foolish final word

Games provide researchers with an effective tool for training and evaluating their AI systems. As the complexity of the games they conquer increases, so does their ability to solve real-world problems.

Katja Hofmann of the Microsoft Research Lab had this to say: "I think what's underappreciated is the power of these platforms to really increase the speed of innovation, to allow us to very rapidly make progress in the underlying learning technology, and the fact that this will translate into better algorithms for real-world applications."