I have seen it rumored all over the Web that TSMC's (TSM -0.43%) 20-nanometer manufacturing technology will not be suitable for high-power graphics processors. Tech news site Fudzilla even claims that the 20-nanometer node is "broken for GPUs," and that both Advanced Micro Devices (AMD 2.44%) and NVIDIA (NVDA 1.67%) -- the two major players in the high-end GPU market -- have "decided that [the 20-nanometer process] is simply not viable for GPUs."

This, in my view, doesn't make a lot of technical sense, particularly as Fudzilla and others are claiming that NVIDIA and AMD will skip straight to 14/16-nanometer FinFET-based graphics processors. Let's dig into this more deeply.

Can't make large, power-hungry chips, eh?
The first explanation that I've heard is that the 20-nanometer node is only good for small, low-power system-on-chip products. However, there is at least one high-profile counterexample out in the wild today that disproves that notion.

Oracle (ORCL 0.72%), for example, announced a while back that it would bring to market its next-generation SPARC processor, known as the SPARC M7. This chip is built on TSMC's 20-nanometer manufacturing process, is aimed at high-performance server workloads, and features approximately 10 billion transistors -- well north of even the biggest GPUs available today.

So, we know that it's quite possible to design and build large, performance-oriented processors on TSMC's 20-nanometer process. That said, the Fudzilla piece did mention "low yields," something also worth exploring.

The interesting thing about the yield question
Two large drivers of chip cost are wafer cost and yields. The wafer cost is how much a fabless chip company needs to pay its manufacturing partner for a wafer of chips; yields determine how many good chips a wafer buyer gets per wafer. The higher the yields, the more good chips per wafer, which should translate into lower per-chip costs for the buyer.
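
To make that relationship concrete, here's a back-of-the-envelope sketch in Python. Every number in it is a made-up assumption for illustration -- TSMC's actual wafer prices and yields are not public.

```python
# Rough per-chip cost model: what a wafer buyer effectively pays per *good* die.
# All figures below are illustrative assumptions, not actual TSMC numbers.

def cost_per_good_die(wafer_cost, gross_dies_per_wafer, yield_rate):
    """Spread the wafer price across only the dies that actually work."""
    good_dies = gross_dies_per_wafer * yield_rate
    return wafer_cost / good_dies

# Hypothetical large GPU: ~100 candidate dies per wafer.
print(cost_per_good_die(wafer_cost=5000, gross_dies_per_wafer=100, yield_rate=0.8))  # ~$62.50
print(cost_per_good_die(wafer_cost=5000, gross_dies_per_wafer=100, yield_rate=0.4))  # ~$125.00
```

Halving the yield doubles the effective cost of each working chip, even though the wafer price never changed.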

If 20-nanometer yields aren't great, then the density advantage of the newer node could be more than offset by the cost of all those defective dies, meaning a significantly higher cost per transistor for a 20-nanometer GPU than for a 28-nanometer one. While the 20-nanometer process also reportedly brings a transistor performance improvement, that improvement may not be compelling enough for the graphics chip vendors to take the cost-per-transistor hit.
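
Here is a sketch of that trade-off, again with purely hypothetical wafer prices, die counts, and yields, showing how a density gain can be wiped out when yields lag:

```python
# Compare cost per transistor on a mature node vs. an immature one.
# Every figure here is an assumption for illustration only.

def cost_per_billion_transistors(wafer_cost, gross_dies, yield_rate, transistors_bn):
    cost_per_good_die = wafer_cost / (gross_dies * yield_rate)
    return cost_per_good_die / transistors_bn

# Hypothetical 28nm GPU: mature yields, cheaper wafer, fewer dies per wafer.
mature = cost_per_billion_transistors(wafer_cost=4000, gross_dies=100,
                                      yield_rate=0.85, transistors_bn=5.0)

# Hypothetical 20nm shrink of the same design: more dies per wafer, pricier wafer, weak yields.
immature = cost_per_billion_transistors(wafer_cost=5500, gross_dies=160,
                                        yield_rate=0.40, transistors_bn=5.0)

print(f"28nm: ${mature:.2f} per billion transistors")    # ~$9.41
print(f"20nm: ${immature:.2f} per billion transistors")  # ~$17.19
```

Under those assumed numbers, the denser node ends up costing nearly twice as much per transistor -- exactly the kind of math that would keep a GPU vendor sitting on 28-nanometer.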

The 16-nanometer node, according to expert Handel Jones, is likely to carry a higher per-transistor cost than the 20-nanometer node (normalized for yields) due to the added complexity of the FinFET transistors. However, if AMD and NVIDIA stick with 28-nanometer through 2015 and migrate to 16-nanometer only in 2016, then yields that will have matured by that point, coupled with a dramatic boost in transistor performance (thanks to the FinFET transistor structure), could be enough to justify the higher cost of the 16-nanometer node. Perhaps, for the 20-nanometer node, the performance/cost trade-off simply didn't make sense.
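
One more sketch, this time of performance per dollar, shows why a big FinFET speed boost can justify a pricier transistor in a way a modest 20-nanometer gain might not. The relative performance and cost figures are invented for illustration only:

```python
# Performance per dollar: does a faster transistor justify a pricier one?
# All relative figures below are illustrative assumptions (28nm = 1.00 baseline).

def perf_per_dollar(relative_performance, relative_cost_per_transistor):
    return relative_performance / relative_cost_per_transistor

baseline_28nm = perf_per_dollar(1.00, 1.00)
# Hypothetical 20nm today: modest speed gain, but poor yields push cost per transistor way up.
planar_20nm   = perf_per_dollar(1.15, 1.60)
# Hypothetical 16nm FinFET in 2016: big speed gain, cost premium eased by matured yields.
finfet_16nm   = perf_per_dollar(1.50, 1.25)

print(baseline_28nm, planar_20nm, finfet_16nm)  # 1.0, ~0.72, 1.2
```

On numbers like these, 20-nanometer would be a step backward in performance per dollar while 16-nanometer would be a step forward, which lines up with the decision the rumors describe.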

Will we see 20-nanometer GPUs?
There is no official information available today about the manufacturing technologies that AMD and NVIDIA plan to use in future discrete graphics chips, so it's anybody's guess whether the major GPU vendors will ultimately move to 20 nanometers. However, if the GPU vendors skip 20-nanometer and go straight to 16-nanometer, I believe that decision will be one of economics (performance per dollar) rather than a reflection of any technical limitation of TSMC's 20-nanometer process.