When Intel (NASDAQ:INTC) acquired artificial intelligence (AI) start-up Nervana Systems late last year, at first glance it appeared to be another big tech company jumping on the AI bandwagon. A closer look, however, reveals a very specific motivation. And it could spell trouble for NVIDIA.


NVIDIA GPU is the go-to chip to train AI systems -- for now. Image source: NVIDIA.

Intel has seen the market for its CPU chips shrink along with worldwide shipments of personal computers, which fell an estimated 6% in 2016, the fifth consecutive year of declines. Competitor NVIDIA (NASDAQ:NVDA), meanwhile, has seen the market explode for its graphics processing units (GPUs), along with its stock price, which tripled in 2016.

Though once used solely to render graphics in video games, NVIDIA's GPUs have become the go-to chips for training neural networks due to their massive parallel processing capability. According to a Financial Times article in December, NVIDIA "is widely credited with having developed the best chips for training the artificial neural networks." The use of NVIDIA GPUs by data centers -- many of them training AI systems -- helped the company's data center revenue nearly triple in the most recent quarter, to $240 million from just $82 million in the prior-year period. Intel dominates the non-AI data center market, selling nearly 99% of the server chips used, and it now has its sights set on the nascent AI market as well.

What is Intel doing?

Late last year, Intel announced that it would release a server chip specifically designed for AI applications -- the Xeon Phi, aka Knights Mill. It indicated this processor would run 2.3 times faster than NVIDIA's GPUs when used for deep learning. NVIDIA countered, saying Intel had deliberately used outdated benchmarks and older versions of its products in the comparison. It claims that had Intel used the updated benchmark, NVIDIA's comparable offering would have been 30% faster, and its latest processor 90% faster.


Intel wants to "unleash the next wave" in artificial intelligence. Image source: Intel.

Intel's recent acquisition pushes it deeper into this market. Nervana Systems took a different approach in its AI research, creating an application-specific integrated circuit (ASIC) called the Nervana Engine. Certain elements of a GPU needed for graphics processing are unnecessary in an AI application. By stripping out those elements and reengineering the memory, Nervana claims it can achieve a 10-fold increase over the computing power currently available from GPUs. In an article from October 2015, Barron's [subscription required] explained:

Their chip has thousands of little logic engines sitting next to pools of memory, with a plethora of connections between them. They claim the chip can perform in a fraction of the time an image-recognition task that would take an Intel server chip 2,000 hours, or an Nvidia GPU, 33 hours. The key is that they are designing for specific algorithms, so the chip's design is more efficient than a general-purpose chip like Intel's or NVIDIA's. 


Nervana Engine ASIC chips are designed to accelerate AI system training. Image source: Nervana Systems.

Intel will begin integrating the new technology into its existing chips in early 2017, so expect new offerings that directly challenge NVIDIA's GPUs in the AI space. Intel also claims that by 2020 it will be able to train AI systems 100 times faster than is possible with today's technology, though some have deemed those goals very aggressive.

It's not the only one

And Intel isn't the only one claiming improvements beyond what GPUs currently accomplish. Alphabet's (NASDAQ:GOOG) (NASDAQ:GOOGL) Google has been conducting AI research for over five years. It announced that its Tensor Processing Unit -- also an ASIC -- delivers performance 10 times faster than any GPU running today, and that it has been running TPUs in its data centers for more than a year. They were also used in AlphaGo, the AI system that recently made headlines by defeating top-ranked Go players. This lends a degree of confirmation to Nervana's claims about the potential of such custom circuits.

NVIDIA sees AI as a billion-dollar opportunity, though its data center revenue amounted to only $240 million in its most recent quarter. Intel's data center revenue, meanwhile, amounted to $4.7 billion in its most recent quarter. This illustrates that while gains in the AI chip market would barely move the needle for Intel, they could be a big mover for NVIDIA. While GPUs currently hold a commanding lead in the training of AI systems, that could change quickly if the technological advances claimed by Intel come to fruition.

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Danny Vena owns shares of Alphabet (A shares). Danny Vena has the following options: long January 2018 $640 calls on Alphabet (C shares), short January 2018 $650 calls on Alphabet (C shares), and long January 2018 $25 calls on Intel. The Motley Fool owns shares of and recommends Alphabet (A shares), Alphabet (C shares), and Nvidia. The Motley Fool recommends Intel. The Motley Fool has a disclosure policy.