Artificial intelligence (AI) has been a boon to a number of companies, but no company has benefited more from this nascent technology than NVIDIA Corporation (NASDAQ:NVDA), the industry leader in graphics processing.

The company's performance accelerated when researchers discovered that the same qualities that made GPUs the best choice for graphics -- the ability to perform massive amounts of mathematical calculations at high speed -- also made them the go-to choice for training AI systems.

NVIDIA's growth over the last several years has been nothing short of astonishing. In the most recent quarter, revenue grew 48% year over year to $1.9 billion, and revenue from AI, which is housed in the company's data center segment, jumped 186% year over year. The stock has more than tripled in the last year, increased 500% over two years, and is up a staggering 1,000% in five years. Investors are rightly wondering whether this performance can continue, and several recent developments make its outlook a bit less certain.

NVIDIA DGX-1 AI supercomputer-in-a-box. Image source: NVIDIA.

We want a piece of that

Success has a tendency to attract competition, and that is certainly true in the case of NVIDIA. Google, a division of Alphabet Inc. (NASDAQ:GOOGL) (NASDAQ:GOOG), recently revealed its second-generation Tensor Processing Unit (TPU), which could challenge NVIDIA in the space. AI occurs in two distinct stages: the training of the system, then the running of the system -- known as inferencing -- after it has been trained. Where this latest version of the TPU differs from the first is in its ability to address both stages of an AI system. NVIDIA has had a near-monopoly on training AI systems, as there was no better alternative. The new iteration of the TPU could change that.

Google also announced success in stringing together 64 of its TPUs into a pod that could produce unrivaled computational capability, exceeding that of the GPU. It is deploying these across its Google Compute Engine and will give researchers and academics access to a cluster of 1,000 cloud TPUs for free, on the condition that they share their research in peer-reviewed publications. Google has not made any move to sell its TPU, so the chips can only be accessed via the company's cloud offerings. Researchers may not want to get locked into Google's ecosystem in the early stages of AI development, lest something better come along.

Google's TPU may be the future of AI processors. Image source: Google.

Intel outside

Intel Corporation (NASDAQ:INTC) had been largely shut out of the explosion of AI-induced growth, as the number-crunching capability of the GPU vastly exceeds that of the CPU. Don't shed too many tears for Intel, though: With an estimated 99% share of the server market, more servers means more Intel inside. And Intel does not intend to concede all this growth to NVIDIA.

The company had already been developing CPUs customized for AI applications, but late last year, it acquired deep-learning start-up Nervana. Nervana had developed what it called the Nervana Engine, an application-specific integrated circuit (ASIC) that removed the elements of a GPU specific to graphics processing and reengineered the memory, creating a chip with tenfold greater computing capability. Intel is hard at work integrating this technology into its Knights Mill Xeon Phi processor, which is customized for deep learning. These chips are scheduled to ship mid-year.

FPGAs may outdo GPUs for AI. Image source: Intel.

Another emerging threat

Google's TPU isn't the only potential competition to NVIDIA's dominant GPU. Several companies, including Intel (through its acquisition of Altera) and Xilinx (NASDAQ:XLNX), have been experimenting with the field-programmable gate array (FPGA), a processor that can be reconfigured for specific functions after manufacturing. FPGAs have been notoriously challenging to program, requiring both hardware and software expertise, but Xilinx has worked to make the process less onerous by releasing tools that ease the hurdles of implementing FPGAs.

Another advantage of the FPGA lies in its power efficiency. The high energy consumption of GPUs makes them more expensive to run, and at the scale of a cloud provider, energy efficiency could be a key differentiator. Microsoft Corporation (NASDAQ:MSFT) has already made a big bet on FPGAs; it has outfitted its entire Azure cloud network with the specialty chips and runs its deep-learning neural networks on them.

Expect more developments

NVIDIA GPU competitor Advanced Micro Devices, Inc. (NASDAQ:AMD) is also scheduled to release its Radeon Instinct line of chips designed specifically for AI applications, which it claims will best NVIDIA's latest and greatest. It remains to be seen whether the new line is superior, or whether this is merely posturing by the smaller opponent.

At this point, there is no one chip perfect for every AI application; each possesses its own strengths and weaknesses. NVIDIA's advantage may hold as the best chip for training AI systems, but it could just as easily give way in the fast-paced and ever-changing field of AI. There isn't any need to sound the alarm bell yet, but NVIDIA's continued growth isn't guaranteed.

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Teresa Kersten is an employee of LinkedIn and is a member of The Motley Fool's board of directors. LinkedIn is owned by Microsoft. Danny Vena owns shares of Alphabet (A shares). Danny Vena has the following options: long January 2018 $640 calls on Alphabet (C shares), short January 2018 $650 calls on Alphabet (C shares), and long January 2018 $25 calls on Intel. The Motley Fool owns shares of and recommends Alphabet (A and C shares) and Nvidia. The Motley Fool recommends Intel. The Motley Fool has a disclosure policy.