Alphabet's (NASDAQ:GOOGL) (NASDAQ:GOOG) Google announced at its I/O Developers Conference in May 2016 that it had designed a new chip, called the tensor processing unit (TPU), built specifically for the demands of training artificial intelligence (AI) systems. The company didn't divulge much at the time, but in a blog post that same week, hardware engineer Norm Jouppi revealed that Google had been running TPUs in the company's data centers for more than a year and found them to deliver an order of magnitude better performance per watt for machine learning. That is roughly equivalent to fast-forwarding technology about seven years into the future (three generations of Moore's Law).
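The "seven years" figure follows from simple arithmetic: if performance per watt doubles roughly every two years (one Moore's Law generation), then a tenfold gain corresponds to log2(10), or about 3.3 doublings, which works out to roughly 6.6 years. A quick sketch of that back-of-the-envelope calculation:

```python
import math

# An order-of-magnitude (10x) gain in performance per watt, expressed
# as Moore's Law doublings, assuming one doubling roughly every 2 years.
gain = 10
years_per_doubling = 2

doublings = math.log2(gain)            # ~3.3 generations
years = doublings * years_per_doubling # ~6.6 years

print(round(doublings, 1), round(years, 1))  # 3.3 6.6
```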

The chip was an application-specific integrated circuit (ASIC), a microchip designed for one dedicated task rather than general-purpose computing. Little else was known about the enigmatic TPU, and the mystery continued until last week, when Google pulled back the curtain to reveal the inner workings of this groundbreaking advance in AI hardware.


Google's tensor processing unit could revolutionize AI processing. Image source: Google.

Speed and efficiency

Google built the TPU to accelerate TensorFlow, the company's open-source machine learning framework: a collection of algorithms that power its deep neural networks, AI systems capable of teaching themselves by processing large amounts of data. Google tailored the TPU to the unique demands of training its AI systems, which had previously run primarily on graphics processing units (GPUs) manufactured by NVIDIA Corporation (NASDAQ:NVDA). While the company runs TPUs and GPUs side by side for now, the new chip could have drastic implications for how AI systems are trained going forward.
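To make "teaching themselves by processing data" concrete, here is a minimal sketch, in plain Python rather than TensorFlow itself, of the core loop such frameworks automate: a model repeatedly nudges its parameters to shrink its error on training examples. The data and single-parameter model are illustrative, not from Google's systems.

```python
# Minimal gradient-descent loop: a one-parameter model "learns" the
# rule y = 2x from examples. Frameworks like TensorFlow automate this
# same adjust-to-reduce-error cycle for millions of parameters.
data = [(1, 2), (2, 4), (3, 6), (4, 8)]  # (input, target) pairs
w = 0.0    # the single learnable parameter
lr = 0.01  # learning rate (step size)

for _ in range(200):                 # repeated passes over the data
    for x, y in data:
        pred = w * x                 # model's current guess
        grad = 2 * (pred - y) * x    # gradient of squared error wrt w
        w -= lr * grad               # step downhill to reduce error

print(round(w, 3))  # converges to ~2.0, recovering the rule y = 2x
```

The TPU's job, in essence, is to run the arithmetic inside that inner loop (huge batches of multiply-and-add operations) far faster and more efficiently than a general-purpose chip.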

Google released a study -- authored by more than 70 contributors -- that provided a detailed analysis of the TPU. In a blog post earlier this month, Jouppi laid out the chip's capabilities: it processed AI production workloads 15 to 30 times faster than the CPUs and GPUs performing the same tasks, and achieved a 30 to 80 times improvement in energy efficiency (performance per watt).

Saved the cost of 12 new data centers 

Google realized several years ago that if customers used Google voice search for just three minutes each day, the company would need to double its existing number of data centers. Google also credits the TPU with providing faster response times for search, serving as the linchpin for improvements in Google Translate, and playing a key role in its AI system's defeat of a world champion at the ancient Chinese game of Go.

Companies are taking a variety of approaches to improving AI systems. Nervana, a start-up recently acquired by Intel Corporation (NASDAQ:INTC), developed its own ASIC, the Nervana Engine, which strips out GPU components not essential to AI workloads. The company also re-engineered the chip's memory and believes it can achieve 10 times the processing performance of current GPUs. Intel is working to integrate this capability into its existing processor platforms to better compete with NVIDIA's offerings.

Not the only game in town

A field-programmable gate array (FPGA), a processor that can be reprogrammed after installation, is another chip being leveraged for gains in AI, and FPGAs have increasingly been used in data centers to accelerate machine learning. Apple Inc. (NASDAQ:AAPL) is widely believed to have installed such a chip in its iPhone 7 to run sophisticated AI features locally on each phone. The company has emphasized that it won't sacrifice user privacy to make advances in AI, so this would be a logical move for its smartphones.


NVIDIA Tesla P100 powers Facebook's AI server. Image source: NVIDIA.

Facebook, Inc. (NASDAQ:FB) has taken a different approach in optimizing its recently released data center server, named Big Basin. The company created a platform that links eight NVIDIA Tesla P100 GPU accelerators, described as "the most advanced data center GPU ever built," with NVLink connectors designed to reduce bottlenecks. Facebook revealed that the new server can train machine learning models that are 30% larger in about half the time, and indicated that the architecture was based on NVIDIA's DGX-1 "AI supercomputer in a box."

Much more innovation to come

Though we hear about AI breakthroughs almost daily, it is important to remember that the science is still in its infancy and new developments will likely continue at a rapid pace. Advances like these make AI systems more efficient and lay the foundation for future progress in the field, but they are difficult to quantify in dollars and cents, as is their potential effect on future revenue and profitability.

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Danny Vena owns shares of Alphabet (A shares), Apple, and Facebook. Danny Vena has the following options: long January 2018 $85 calls on Apple, short January 2018 $90 calls on Apple, long January 2018 $640 calls on Alphabet (C shares), short January 2018 $650 calls on Alphabet (C shares), and long January 2018 $25 calls on Intel. The Motley Fool owns shares of and recommends Alphabet (A shares), Alphabet (C shares), Apple, Facebook, and Nvidia. The Motley Fool recommends Intel. The Motley Fool has a disclosure policy.