Image credit: NVIDIA.

NVIDIA's recently introduced GeForce GTX 1080 packs a chip known as the GP104-400. At 314 square millimeters, it is probably the largest chip built on TSMC's (NYSE:TSM) 16-nanometer FinFET process that consumers will be able to buy anytime soon. By contrast, the largest mobile applications processor built on the technology -- the Apple A9X that powers both iPad Pro models -- is a mere 147 square millimeters.

In this article, I would like to explore the economics of the GP104 GPU that powers the GeForce GTX 1080, particularly relative to the GM200 chip, a 601 square millimeter behemoth manufactured using TSMC's 28-nanometer process.

## Coming up with relative cost estimates

Analyst Handel Jones estimates that by the end of 2016, a 16-nanometer wafer should cost a fabless customer like NVIDIA around \$7,779.22. In contrast, a 28-nanometer wafer at the end of 2014 (by which point the process was very mature) should have run a fabless customer approximately \$4,577.25.

Silicon Edge's dies-per-wafer estimator tool suggests that a wafer of GM200 chips should pack a total of 91 dies, while a wafer of GP104 chips should be able to cram in approximately 180 dies. This passes a basic sanity check, since the GP104 is about half the size of the GM200.
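The tool's output can be roughly reproduced with the standard gross-die approximation for a 300-millimeter wafer. This sketch ignores scribe lines and edge-exclusion rules that real estimators account for, so its counts land close to -- but not exactly on -- the figures above.

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Approximate gross dies per wafer using the common formula:
    (wafer area / die area) minus an edge-loss correction term.
    Ignores scribe lines, so results differ slightly from tools
    like Silicon Edge's estimator."""
    radius = wafer_diameter_mm / 2
    wafer_area = math.pi * radius ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return math.floor(wafer_area / die_area_mm2 - edge_loss)

print(dies_per_wafer(601))  # GM200 at 601 mm^2: roughly 90 gross dies
print(dies_per_wafer(314))  # GP104 at 314 mm^2: roughly 187 gross dies
```

The approximation confirms the basic sanity check: halving the die area roughly doubles the gross die count.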

Now, if we assume that the 28-nanometer process, by virtue of its maturity, has a very low defect density of 0.01 defects per square centimeter, then -- using iSine's die yield calculator tool -- NVIDIA should get around 76 good chips per 28-nanometer wafer.

Dividing the wafer cost by the number of good dies yields a cost of \$58.68 per chip.

Doing the same calculation for the 16-nanometer GP104, but assuming a defect density of 0.015 defects per square centimeter (since it is a less mature process), yields around 164 good dies per wafer. Dividing the estimated 16-nanometer wafer cost by this figure leads to a die cost estimate of \$47.43.
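The yield-and-cost step above can be sketched with the simple Poisson yield model, where yield is exp(-D × A) for defect density D and die area A. The iSine tool likely uses a different, more pessimistic model, so the good-die counts here come out somewhat higher than the figures I quoted -- but the relative picture (the GP104 coming in cheaper) is the same.

```python
import math

def good_dies(gross_dies, defect_density_per_cm2, die_area_mm2):
    """Estimate good dies per wafer with a Poisson yield model:
    Y = exp(-D * A), with the die area converted to cm^2.
    This is a stand-in for the iSine calculator, not the same model."""
    die_area_cm2 = die_area_mm2 / 100.0
    yield_frac = math.exp(-defect_density_per_cm2 * die_area_cm2)
    return math.floor(gross_dies * yield_frac)

def cost_per_die(wafer_cost, gross_dies, defect_density, die_area_mm2):
    return wafer_cost / good_dies(gross_dies, defect_density, die_area_mm2)

# 28 nm GM200: ~$4,577.25 wafer, 91 gross dies, D = 0.01/cm^2, 601 mm^2
gm200_cost = cost_per_die(4577.25, 91, 0.01, 601)
# 16 nm GP104: ~$7,779.22 wafer, 180 gross dies, D = 0.015/cm^2, 314 mm^2
gp104_cost = cost_per_die(7779.22, 180, 0.015, 314)
print(f"GM200: ${gm200_cost:.2f}, GP104: ${gp104_cost:.2f}")
```

Under this simplified model the GP104 still works out several dollars cheaper per die than the GM200, which is the conclusion that matters for the margin discussion below.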

I should note that these estimates involve a lot of guesswork. They are intended to provide a relative cost comparison between the two chips under assumptions that I believe are realistic, not precise absolute figures.

## NVIDIA financial implications

Based on the analysis above, it looks as though the GPU that powers the GTX 1080 may actually be cheaper to manufacture than the GM200 that powered the GTX Titan X and GTX 980 Ti flagship cards of yesteryear.

In this case, it's little wonder that NVIDIA has chosen to essentially "end of life" the GTX 980 Ti in favor of the GTX 1080. The 1080 is faster, more efficient, and likely cheaper to build. It's a no-brainer for NVIDIA.

At the end of the day, it looks as though the GeForce GTX 1080 won't have a negative impact on the company's gross profit margins. Of course, if the 16-nanometer process winds up performing significantly worse from a defect density perspective than I assumed in my calculations, the analysis could change.
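To put a rough number on that caveat, the same Poisson-style yield model can show how sensitive the GP104 die-cost estimate is to defect density. The densities below are arbitrary illustration points, not NVIDIA or TSMC figures.

```python
import math

WAFER_COST = 7779.22   # estimated 16 nm wafer cost (Handel Jones)
GROSS_DIES = 180       # estimated GP104 gross dies per wafer
DIE_AREA_CM2 = 3.14    # GP104 die area in cm^2

# Illustrative defect densities only: the assumed value, then
# roughly 3x and 7x worse, to stress-test the conclusion.
for d in (0.015, 0.05, 0.10):
    good = GROSS_DIES * math.exp(-d * DIE_AREA_CM2)
    print(f"D = {d:.3f}/cm^2 -> ~${WAFER_COST / good:.2f} per die")
```

Even at several times the assumed defect density, the estimated GP104 die cost only climbs toward the GM200 estimate rather than past it, which is why I feel comfortable with the margin conclusion.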

However, at this point, I feel comfortable in assuming that as NVIDIA transitions its product lineup from 28-nanometer parts to 16-nanometer parts, there should be a minimal margin impact. By far the biggest driver of NVIDIA's gross profit margins, be they positive or negative, will likely be the competitiveness of its products in the marketplace.