NVIDIA (NVDA 1.96%) shares more than tripled over the past 12 months, and they could still have room to run. I highlighted three key catalysts in a recent article: the company's new low-end GPUs, its next-generation gaming and data center GPUs, and rising sales of its Tegra processors in the automotive market.
But there's another big reason to love NVIDIA stock: the power efficiency of its chips. Let's discuss why power efficiency matters, and how NVIDIA maintains its edge against rivals like AMD (AMD 3.73%) in this key department.
Why does power efficiency matter?
As computers become more powerful and the amount of data streamed from the cloud surges, power consumption rises. Energy costs climb, and data centers must invest in more real estate and stronger cooling systems for their servers.
For mainstream consumers, more power-efficient chips can be used to produce smaller, cooler, and quieter PCs. Power-efficient chips also enable automakers to build beefier infotainment and navigation systems, add new computer vision and machine learning features to their vehicles, and use automotive "supercomputers" -- like NVIDIA's Drive PX 2 -- to guide driverless cars.
Why does NVIDIA have an edge in power efficiency?
NVIDIA has beaten AMD on power efficiency over the past few years. Analysts generally benchmark power efficiency by measuring a chip's performance per watt (PPW).
Looking at the low end of the market, benchmarks at GPUBoss show that NVIDIA's GTX 1050 has a PPW score of 7.9 (out of 10), while AMD's comparable RX 460 has a PPW score of 7.5. Another review at Extremetech, which compared NVIDIA's mid-range GTX 1060 to AMD's RX 580, revealed a stunning difference in power consumption -- the GTX 1060 used just 64% as much electricity per frame of animation as the RX 580.
RBC Capital analyst Mitch Steves also recently pitted the GTX 1070 against the RX 580 in a cryptocurrency mining match. Steves claims that while the RX 580 offers 3% better mining performance, the GTX 1070 consumed 33% less power -- making it the more cost-efficient card for long-term mining.
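To see why the power gap matters more than the performance gap, it helps to run the arithmetic behind Steves' comparison. The sketch below uses normalized, hypothetical numbers -- only the 3% and 33% figures come from the comparison above -- to estimate mining performance per watt:

```python
# Hypothetical, normalized figures: only the 3% performance gap and the
# 33% power gap come from Steves' comparison; the baseline values of
# 1.00 are made up for illustration.
rx580_hashrate = 1.00        # baseline mining performance
gtx1070_hashrate = 0.97      # ~3% slower than the RX 580
rx580_power = 1.00           # baseline power draw
gtx1070_power = 0.67         # ~33% less power than the RX 580

# Mining performance per watt -- the metric that drives long-term cost,
# since electricity is the dominant recurring expense of mining.
rx580_ppw = rx580_hashrate / rx580_power
gtx1070_ppw = gtx1070_hashrate / gtx1070_power

advantage = gtx1070_ppw / rx580_ppw
print(f"GTX 1070 mines ~{(advantage - 1) * 100:.0f}% more per watt")
# GTX 1070 mines ~45% more per watt
```

Under these assumptions, a 33% power reduction swamps a 3% performance deficit: the GTX 1070 does roughly 45% more mining work per watt, which is why Steves called it the more cost-efficient card.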
Starting with the Fermi generation of chips in 2010, NVIDIA has focused on improving PPW rather than absolute performance alone, with the goal of eliminating power consumption bottlenecks in HPC (high-performance computing) systems.
Fermi was replaced by Kepler in 2012, which was succeeded by Maxwell in 2014. NVIDIA delivered a huge improvement in PPW with Maxwell, through the use of a larger L2 cache, improved memory efficiency, a new Streaming Multiprocessor (SM) configuration, and tile-based rasterization -- which splits graphics into smaller "tiles" instead of rendering a whole scene at once. Many of those improvements can be found in its current-gen Pascal cards.
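The tile-based idea itself is simple enough to sketch. The Python below is a toy illustration of the general technique, not NVIDIA's actual implementation: the frame is carved into small tiles, and each tile is processed to completion before moving on, so intermediate pixel data can stay in fast on-chip memory rather than power-hungry external DRAM.

```python
TILE = 16  # hypothetical tile size in pixels

def render_tiled(width, height, draw_tile):
    """Visit the frame one small tile at a time.

    Finishing each tile before moving on lets a GPU keep that tile's
    color and depth data in on-chip cache, cutting the off-chip memory
    traffic that dominates a graphics chip's power consumption.
    """
    for ty in range(0, height, TILE):
        for tx in range(0, width, TILE):
            # Clamp the last row/column of tiles to the frame edge.
            draw_tile(tx, ty, min(TILE, width - tx), min(TILE, height - ty))

# Example: a 64x32 frame is covered by 4 x 2 = 8 tiles.
tiles = []
render_tiled(64, 32, lambda x, y, w, h: tiles.append((x, y, w, h)))
print(len(tiles))  # 8
```

The power win comes from locality: each 16x16 tile's working set fits in cache, whereas rendering the whole scene at once forces constant round trips to external memory.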
Why NVIDIA's rivals should be worried
NVIDIA's superiority in power efficiency puts it well ahead of the tech curve across multiple markets. This can be seen in Max-Q, NVIDIA's new design approach for creating quieter, thinner, and faster gaming laptops. The design pairs precision-engineered GPUs with optimized components to create laptops as thin as 18mm that offer up to 70% more gaming performance than comparable devices. If these designs work as advertised, AMD's mobile Radeon GPUs could look a lot less appealing.
NVIDIA also recently offered the world a glimpse at the future of data centers with the DGX-1 supercomputer, which runs on its next-gen Volta chipset and puts "400 servers" into a single box. The $149,000 box is powered by eight Tesla V100 GPUs and two 20-core Intel (INTC 1.11%) Xeon E5-2698 processors. The system is optimized for machine learning -- a next-gen workload in which GPUs generally outperform CPUs.
In the past, Bloomberg shifted a bond pricing application running on 2,000 CPUs to a rack of 49 NVIDIA Tesla GPUs. The CPU-based system cost $4 million to build and $1.2 million in annual energy bills, while the GPU-based one cost less than $150,000 and cut the annual energy bill to $30,000. On its website, NVIDIA cites similar case studies with Hess and Procter & Gamble, both of which produced comparable savings in hardware and energy costs.
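Those case-study numbers make the economics concrete. Here is a rough total-cost-of-ownership comparison using only the figures cited above, plus two illustrative assumptions: a three-year horizon, and $150,000 as the GPU system's hardware cost (the article says "less than $150,000"):

```python
years = 3  # hypothetical ownership horizon, chosen for illustration

# Figures from the Bloomberg case study above.
cpu_hardware, cpu_energy_per_year = 4_000_000, 1_200_000
gpu_hardware, gpu_energy_per_year = 150_000, 30_000  # upper bound on hardware

cpu_tco = cpu_hardware + cpu_energy_per_year * years  # $7.6M over 3 years
gpu_tco = gpu_hardware + gpu_energy_per_year * years  # $240K over 3 years

print(f"CPU system: ${cpu_tco:,}")                # CPU system: $7,600,000
print(f"GPU system: ${gpu_tco:,}")                # GPU system: $240,000
print(f"Roughly {cpu_tco / gpu_tco:.0f}x cheaper")  # Roughly 32x cheaper
```

Under these assumptions, the GPU rack is over 30 times cheaper to own across three years, and the gap widens every additional year the systems run.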
If the DGX-1 offers even better savings, Intel -- which has a near monopoly on data center CPUs -- would likely sell fewer Xeon chips. That's why Intel plans to counter NVIDIA with its next-gen Knights Mill Xeon Phi chips, which it claims will deliver robust machine learning performance on their own, without the aid of GPUs.
The key takeaway
Looking ahead, NVIDIA's leading position in power-efficient GPUs should give it an edge against AMD in multiple markets. Gamers will get cooler, quieter cards and thinner gaming laptops, while data centers will require less real estate and consume less power. This makes NVIDIA a great long-term play on the growing importance of GPUs and the waning relevance of Intel's x86 CPUs. If NVIDIA maintains that edge, companies like AMD and Intel could struggle to keep up.