Editor's Note: This is the second of a three-part series on the colossal changes taking place in the IT world.

NVIDIA (NVDA 0.76%) has become one of the hottest companies in the technology industry. Its graphics processing units (GPUs) made computing more efficient, causing its stock to skyrocket more than 500% during the past three years. 

I have long been a fan of NVIDIA. I visited its corporate headquarters in 2015 and was fascinated to find out about deep learning. I've been a shareholder for years and have made quite a bit of money.

But NVIDIA has come under fire in recent months, and analysts are divided about where its growth will come from. The company has already dominated consumer gaming, so it now looks to data centers as a key market to expand its top line. Bulls will point to NVIDIA's 58% revenue growth in data centers as a sign that it's showing solid progress.


I remain much more skeptical. I do agree that data centers really matter, especially those of the largest cloud vendors like Amazon.com (AMZN -1.14%), with its Amazon Web Services (AWS) business, or Alphabet (GOOG 0.37%) (GOOGL 0.35%) with the Google Cloud Platform.

But it's becoming increasingly clear to me that GPUs are not the right fit for data centers. I believe NVIDIA's future will look much different than its past, and that's not good news for investors.

Is there still room for NVIDIA GPUs in data centers like this one? Image source: Getty Images.

Looking to the clouds

To set the scene, the huge data centers of large cloud computing vendors are truly the holy grail for hardware providers. There are scores of locations all across the world, each managing vast amounts of data to be analyzed. A win here for NVIDIA could lead to significant future GPU sales.

But the cloud titans have unique needs for their workflow. And those needs often don't really align with the benefits of GPUs.

For example, Amazon is designing its own chips to make its Echo devices more responsive to vocal cues. For this application, latency (the time it takes Alexa to understand and respond) is very important. Google is designing its own chips, too, to reduce its data center power consumption. Here, the number of operations performed per watt of power consumed is what matters most. Neither of these applications requires image or video recognition.

The underlying code of software applications is also constantly changing, and machine-learning algorithms are continually being retrained. Alexa might originally be taught English according to the Oxford English Dictionary, but eventually be retrained to recognize that "killing that interview" is a good thing and not an act of homicide.

Now, multiply each application's specific needs and constantly changing code by hundreds of thousands of other customers, each renting storage, processing, and "machine learning as a service" from cloud providers like Amazon Web Services and the Google Cloud Platform. Things quickly get very complex.

To its credit, NVIDIA has done what it can to solve its customers' problems. It has built software optimizers that map what customers are trying to accomplish onto the GPU hardware best suited to run it. TensorRT is one example: it takes a trained model and optimizes it to run certain data center workloads efficiently on NVIDIA's GPUs.
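For the technically inclined, here is a rough sketch of what that optimization step looks like in practice, using TensorRT's Python interface to compile a trained model into an engine tuned for a specific GPU. The exact calls vary by TensorRT version, and the file names (model.onnx, model.plan) are placeholders, so treat this as illustrative rather than as NVIDIA's official recipe.

```python
import tensorrt as trt  # NVIDIA's TensorRT Python bindings

logger = trt.Logger(trt.Logger.WARNING)

# Build a TensorRT network from a trained model exported to ONNX format.
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:  # placeholder model file
    if not parser.parse(f.read()):
        raise RuntimeError("Failed to parse the ONNX model")

# Ask TensorRT to optimize the network for the GPU in this machine,
# allowing reduced-precision math where the hardware supports it.
config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)

plan = builder.build_serialized_network(network, config)
with open("model.plan", "wb") as f:  # placeholder output file
    f.write(plan)
```

The key point for this discussion is that the tuning happens in software: the model is reshaped to fit whatever fixed GPU hardware is available, not the other way around.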

But the fit is never perfect.

Companies have had to accept that their software is, to some extent, being force-fit to NVIDIA's available hardware models and their associated capabilities. The end users didn't know what architecture was actually being abstracted behind the scenes by NVIDIA's optimizers; they just knew that GPUs handled their needs far better than CPUs did, which is exactly why we've seen the outsized growth of GPUs in recent years.

Herein lies the problem for NVIDIA. GPUs aren't the magic solution that can handle all of the data center complexity. There are inefficiencies created with every force-fitting of an application to a GPU. And things are getting more complex every day.

Large companies with deep pockets are already designing their own application-specific integrated circuits (ASICs) to optimize individual tasks. But even for those without billions of dollars and dedicated research teams, a solution is beginning to emerge.

What if there were chips that could be programmed and then reprogrammed, to always perfectly match whatever you want your software to do?

An artist's rendering of an integrated circuit. Image source: Getty Images.

Enter the FPGA

These chips do exist, and they're called field programmable gate arrays (FPGAs). Field programmable chips can have their logic continually changed, meaning they're adaptable to changing software requirements.

That sets them apart from instruction-based chips like CPUs and GPUs, but the distinction hasn't really mattered historically. Computing was traditionally done on CPUs, which were doubling in performance roughly every 18 months anyway, and GPUs then offered a more efficient way to handle certain workloads than CPUs could.

But that point of differentiation really matters today. Artificial intelligence (AI) is a fickle beast, and the constant retraining of its algorithms is making it hard for CPUs and GPUs to keep up. And just as with Alexa, latency is becoming increasingly important for any application requiring split-second response times. The inefficiencies are becoming less tolerable.

The programmable aspect of FPGAs could completely eliminate those inefficiencies. By abstracting the layer between software models and hardware -- a "library of model optimizers" to the techies -- an ecosystem using FPGAs could theoretically run every application perfectly. FPGA as a Service could be like CUDA on steroids: fine-tuned to match the hardware to specific algorithms, rather than using whichever of NVIDIA's GPUs would be the closest match. And FPGAs can be optimized and then reoptimized over time, which is convenient when the code and the logic change.
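To make the idea of renting FPGA capacity concrete, here is a hedged sketch of how a customer might spin up one of AWS's FPGA-equipped F1 instances using the boto3 library. The AMI ID and key pair name below are placeholders; in practice you would supply AWS's FPGA Developer AMI or an image containing your own compiled FPGA design.

```python
import boto3  # AWS SDK for Python

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a single FPGA-backed instance. f1.2xlarge is the smallest F1 size,
# with one FPGA attached. The AMI ID and key pair are placeholders, not
# real values.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder: your FPGA-enabled image
    InstanceType="f1.2xlarge",
    KeyName="my-key-pair",             # placeholder key pair name
    MinCount=1,
    MaxCount=1,
)

print("Launched:", response["Instances"][0]["InstanceId"])
```

In other words, renting an FPGA today looks much like renting any other cloud server; the hard part is the hardware design loaded onto it, which is where the cloud vendors' engineering talent comes in.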

FPGAs won't be the answer for everyone. Their up-front cost is greater than that of CPUs and GPUs, and programming them takes a lot of time and requires highly experienced engineers.

But those factors aren't as concerning for the cloud vendors. They have access to IT talent and can afford the premium up-front cost. The benefit they derive is lower overall power costs for their data centers, which they can pass along as more-competitive rates to their customers renting the processing and storage.

This is the right market, and now is the right time, for FPGAs. And it's exactly why the largest cloud service providers, including AWS, Microsoft, Alibaba, and Baidu, are deploying them at a rapid pace globally. I believe NVIDIA's growth rates in data centers will slow, and that its greatest days may now be behind it. 

What's next? In part three, I reveal one company that is much better positioned to take advantage of the colossal changes taking place in the IT world.