During the last decade or so, graphics specialist NVIDIA (NASDAQ:NVDA) has worked to find new use cases for its graphics processors beyond the large (and lucrative) market for PC gaming. Not all of the adjacent markets that the company has pursued ultimately bore fruit (see: smartphones and tablets), but several have. One particularly important one, which is getting incredibly exciting for the company, is the market for high-performance accelerators targeted at data centers.
Historically, NVIDIA has addressed this market by taking graphics processors that were also designed for gaming, professional visualization, and other markets, and -- with a lot of software and ecosystem work on top -- repurposing them for data centers. However, with the company's recently announced Tesla P100 accelerator, which is based on a chip that the company refers to as GP100, it's clear that the data center has become a first-class citizen in terms of the company's chip development. Allow me to explain.
GP100 is a fairly radical departure from GP102
NVIDIA's top graphics processor for data-center applications is the GP100 that I mentioned above. Historically, the very highest-end chip in a given NVIDIA graphics-processor lineup would be very large, very expensive to manufacture, and packed with both a lot of single-precision computing capability, which is useful for games, and a lot of double-precision computing capability -- which is not useful for games, but is helpful for many tasks that supercomputers are called upon to perform.
With Pascal, NVIDIA departed from this strategy and developed two separate "high end" Pascal chips. GP100 is the company's most-complex graphics-processing unit, with 3,840 single-precision CUDA cores, 1,920 double-precision CUDA cores, HBM2 (3D-stacked) memory, and a streaming-multiprocessor configuration that has changed in non-trivial ways from the one found in the GP102/GP104/GP106 chips. The die size is gargantuan at 610 square millimeters.
GP102 is the company's most-powerful gaming-oriented chip, but it uses less advanced GDDR5X memory -- likely for cost reasons -- and has minimal double-precision support. The silicon die itself is also smaller at 471 square millimeters, which helps to keep costs down relative to GP100.
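To give a feel for what those GP100 core counts mean in practice, here's a rough peak-throughput sketch. The ~1.48 GHz boost clock is an assumption for illustration (it isn't cited in this article), and the calculation uses the standard rule of thumb that each CUDA core can retire one fused multiply-add (two floating-point operations) per cycle:

```python
# Theoretical peak throughput for GP100, using the core counts cited above.
SP_CORES = 3840    # single-precision CUDA cores
DP_CORES = 1920    # double-precision CUDA cores
CLOCK_HZ = 1.48e9  # assumed boost clock -- an illustrative guess, not a spec

def peak_tflops(cores, clock_hz):
    """Peak = cores * 2 FLOPs per cycle (one fused multiply-add) * clock."""
    return cores * 2 * clock_hz / 1e12

sp = peak_tflops(SP_CORES, CLOCK_HZ)  # roughly 11 TFLOPS single precision
dp = peak_tflops(DP_CORES, CLOCK_HZ)  # roughly half that in double precision
print(f"SP: {sp:.1f} TFLOPS, DP: {dp:.1f} TFLOPS")
```

Note the clean 2:1 ratio between single- and double-precision throughput -- that half-rate double-precision support is exactly what GP100 has and GP102 largely lacks.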
NVIDIA's data-center biz is getting pretty large
In the past, NVIDIA's data-center business simply wasn't large enough to justify the development of an entirely separate, highly complex chip targeted only at data center/high-performance computing applications. That, however, is changing.
In the company's most-recent fiscal year, its data-center-oriented sales came in at $320 million. This is quite small relative to the company's $2.818 billion gaming business, or even its $750 million professional visualization business, but it's a business that has been growing very rapidly.
To put the $320-million figure into perspective, this business generated just $56 million in the company's fiscal year 2013. The three-year revenue compounded annual growth rate for the business, from fiscal year 2013 to fiscal year 2016, was about 80%, as NVIDIA pointed out in an analyst-day presentation.
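That growth-rate claim is easy to check from the two revenue figures above:

```python
# CAGR check for the data-center figures cited above: $56 million in
# fiscal 2013 growing to $320 million in fiscal 2016, a three-year span.
start, end, years = 56e6, 320e6, 3

# Compound annual growth rate: the constant yearly rate that turns
# `start` into `end` over `years` years.
cagr = (end / start) ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")  # a bit under 79% -- in line with NVIDIA's ~80%
```

In other words, the business has been growing at nearly 80% per year, compounded, for three straight years.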
NVIDIA's public commentary has been quite bullish on the opportunity here, and it would seem that the company is willing to invest now in order to reap the potentially large revenue growth in the future. Given that the company is in a strong financial position, and has the wherewithal to invest ahead of what could one day be in excess of a billion dollars in high-margin revenue, this seems like the correct course of action.