There's no question that NVIDIA Corporation (NASDAQ:NVDA) has been benefiting from its early move into artificial intelligence (AI). In the company's most recent financial release, revenue from its data center segment, which contains its AI-related sales, skyrocketed 186% over the prior-year quarter. That segment now accounts for more than 21% of NVIDIA's quarterly revenue of nearly $2 billion, up from just 6% two years ago. The company's stock price has followed a similar trajectory, gaining nearly 1,000% over the past five years. 

Those gains have been driven by NVIDIA's graphics processing units (GPUs), which have been the top choice for training AI systems. Alphabet (NASDAQ:GOOGL) (NASDAQ:GOOG) division Google has been at the forefront of AI development with its Google Brain project and, later, with its acquisition of DeepMind, both specializing in deep-learning neural networks. It's also been a big user of NVIDIA's GPUs. Now, recent developments at Google may be about to change the status quo, putting NVIDIA's near-monopoly on training AI systems -- and its future growth -- in jeopardy.

Google's TPU may be a rival to NVIDIA's GPU. Image source: Google.

Taking the fight to NVIDIA

Last week at Google's 2017 I/O Developers Conference, the company unveiled the newest version of its Tensor Processing Unit (TPU), the chip it developed in-house for its AI systems. The bombshell, however, was the revelation that the new version of the TPU can handle both training and inference -- the previous version could handle only inference. What does this mean, and what does any of it have to do with NVIDIA and the GPU?

A little background

Unless you work in the field, you probably don't know that AI happens in two distinct phases. The first is the training of AI systems, which involves creating the algorithms, building the software models -- called neural networks -- and training them to perform a specific task such as image recognition or language processing. This training phase is computationally and mathematically intensive. 

Once these systems are trained, they go about the task for which they were designed, sifting through massive amounts of data and using their unique ability to recognize patterns to perform these data-intensive tasks with speed and precision. The execution of these tasks is called inference, because the system infers things from the data it's processing based on its training. 
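To make the two phases concrete, here's a minimal sketch in Python using NumPy. The "model" is an invented toy linear model rather than a real neural network, and all the numbers are made up for illustration -- but the shape of the work is the same: training grinds through repeated, math-heavy update passes, while inference is a single cheap calculation with the parameters already learned.

```python
import numpy as np

# --- Training phase: fit a tiny model by gradient descent ---
# (Illustrative only: real neural networks have many layers and
# millions of parameters, which is what makes training so costly.)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))      # 100 examples, 3 features each
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w                     # targets for the toy linear model

w = np.zeros(3)                    # model parameters to be learned
for _ in range(500):               # repeated, math-heavy passes
    grad = 2 * X.T @ (X @ w - y) / len(X)
    w -= 0.1 * grad                # gradient-descent update

# --- Inference phase: the trained model applies what it learned ---
x_new = np.array([1.0, 1.0, 1.0])
prediction = x_new @ w             # one cheap dot product per query
```

The asymmetry in cost is the point: the loop above touches every training example 500 times, while inference touches the learned weights once per query -- which is why different chips can win at each phase.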

Until now, GPUs have been the best option for training AI systems. These chips can perform a massive number of mathematical calculations in parallel, which is what makes them so well suited to rendering graphics. It's also what made them the ideal choice for training AI. The rapid, enormous parallel processing provided by the humble GPU had no equal.
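A rough illustration of why that parallelism matters: the core operation in a neural network is a large matrix multiplication whose pieces can be computed independently of one another. This toy NumPy sketch (the matrix sizes are invented) splits one multiplication into chunks computed separately -- standing in for the thousands of cores a GPU would use -- and confirms the pieces reassemble into the same answer.

```python
import numpy as np

# A neural-network layer boils down to one big matrix multiplication.
# Each block of rows in the result depends only on its own slice of
# the inputs, so the work can be farmed out to many workers at once.
rng = np.random.default_rng(1)
inputs = rng.normal(size=(8, 4))    # a batch of 8 examples
weights = rng.normal(size=(4, 5))   # the layer's learned weights

full = inputs @ weights             # one big multiply, done serially

# Fake four "parallel workers" by computing 2-row blocks separately.
chunks = [inputs[i:i + 2] @ weights for i in range(0, 8, 2)]
reassembled = np.vstack(chunks)     # stitch the partial results back

assert np.allclose(full, reassembled)  # identical result, in pieces
```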

There's more

Google also announced that it has developed a system to string together 64 TPUs on a server called a "TPU pod," which will produce unrivaled computational ability. Fei-Fei Li, Google's chief scientist of AI and machine learning and the director of Stanford's AI Lab, said the new TPUs are "delivering a staggering 180 teraflops of computing power and are built for just the kind of number crunching that drives machine learning today." 

A "TPU pod," built with 64 second-generation TPUs, delivers up to 11.5 petaflops of machine-learning acceleration. Image source: Google.
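Those two figures are consistent with each other. A quick back-of-the-envelope check in Python, using only the numbers Google quoted, shows that 64 TPUs at 180 teraflops each works out to the pod's advertised 11.5 petaflops:

```python
# Sanity-check Google's numbers: 64 second-generation TPUs at
# 180 teraflops apiece should match the quoted per-pod figure.
tpus_per_pod = 64
teraflops_per_tpu = 180

pod_teraflops = tpus_per_pod * teraflops_per_tpu  # 11,520 teraflops
pod_petaflops = pod_teraflops / 1000              # 11.52 petaflops
```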

In a blog post, Jeff Dean, a senior fellow on the Google Brain team, wrote: "Using these TPU pods, we've already seen dramatic improvements in training times. One of our new large-scale translation models used to take a full day to train on 32 of the best commercially available GPUs -- now it trains to the same accuracy in an afternoon using just one-eighth of a TPU pod." 

When Google introduced its first TPU at the I/O Developers Conference in May 2016, CEO Sundar Pichai said, "TPUs are an order of magnitude higher performance per watt than commercial FPGAs [field-programmable gate arrays] and GPUs." The new chip was more energy efficient and specifically designed to integrate with TensorFlow, Google's software library for training AI systems. This optimized hardware and software combination had been employed in-house at Google for more than a year. While GPUs were still the chip of choice for training, Google's new TPU had an edge in inference, the work performed once the system was trained.

Going forward

NVIDIA isn't ceding its lead in the field without a fight. It recently introduced its own tensor technology, along with other advancements in its GPU architecture. Its GPUs have been the industry standard for some time, and that isn't likely to change overnight. Still, investors should be aware that the field of AI is in its infancy and the technology is changing almost daily. NVIDIA is still the biggest player in town when it comes to training AI systems, but Google has thrown down the gauntlet, putting the company on notice that it isn't the only game in town. 

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Danny Vena owns shares of Alphabet (A shares). Danny Vena has the following options: long January 2018 $640 calls on Alphabet (C shares) and short January 2018 $650 calls on Alphabet (C shares). The Motley Fool owns shares of and recommends Alphabet (A shares), Alphabet (C shares), and Nvidia. The Motley Fool has a disclosure policy.