Chip giant Intel's (NASDAQ: INTC) 10nm chip manufacturing technology, which is expected to offer a large leap in power efficiency and more than halve chip area from the company's current 14nm technology, is late. Really late.

Chip manufacturing technologies are often labeled with numbers in nanometers (nm), which don't correspond to any physical characteristic or dimension of the technology in question. Instead, smaller numbers usually indicate area reductions and performance enhancements over technologies labeled with larger numbers. So 10 is an improvement over 14.

Products built using Intel's 10nm technology were originally supposed to ship in high volumes during 2016, but that schedule was pushed out into the second half of 2017. Then production was delayed again and now, according to Intel's own statements, it doesn't expect to ship its first products built using its 10nm technology in significant quantities until the second half of 2018.

Though it's clear that the 10nm production delays are due, at least in part, to the company's need for more time to perfect the technology, I think there's another factor at play. 

[Image: A wafer of Intel chips, with finished chips in front of it. Image source: Intel.]

Semiconductor economics 101

The key reason chip manufacturers like to build new technologies that dramatically reduce chip area is economic. Migrating to a denser technology pushes up the cost per square millimeter of chip area, since both the number of manufacturing steps and the complexity of each step increase. However, the increased density is supposed to more than offset the higher wafer cost, making the effective cost per transistor cheaper. (Chips are made up of hundreds of millions, if not billions, of transistors.)

Let's put some concrete numbers to this. Suppose we want to build a chip called Chip A using the coarser 14nm technology, and assume a 14nm wafer costs $5,000 to manufacture. This chip's physical footprint measures 100 square millimeters, so about 600 of them can be produced on a typical 14nm wafer. If all of these chips work -- that never happens in the real world, but bear with me -- the cost per chip is $5,000 divided by 600, or about $8.33.
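For readers who like to check the math, here's a minimal sketch of that arithmetic in Python. The wafer cost and die count are the illustrative assumptions above, not Intel's actual figures:

```python
# Cost per chip for Chip A on 14nm, using the article's illustrative numbers.
# The 600 dies-per-wafer figure already bakes in edge losses on the wafer.

wafer_cost_14nm = 5_000   # assumed cost to manufacture one 14nm wafer ($)
dies_per_wafer_a = 600    # Chip A dies (100 sq. mm each) per wafer

cost_per_chip_a = wafer_cost_14nm / dies_per_wafer_a
print(f"Chip A cost per chip: ${cost_per_chip_a:.2f}")  # -> $8.33
```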

Now, suppose we want to build a chip with identical functionality to Chip A, but instead of building it using the coarser 14nm technology, we use the denser 10nm technology. Let's refer to this 10nm version of Chip A as Chip B. Also assume that a 10nm wafer is 30% more expensive to produce than a 14nm wafer, putting its manufacturing cost at $6,500.

Since the cost of a 10nm wafer is much higher than the cost of a 14nm wafer, it's clear that if we were to build two chips of the same size on each of these technologies, the one built using the 14nm technology would be cheaper.

But there's a catch: A chip with identical functionality to the 100-square-millimeter 14nm chip wouldn't measure 100 square millimeters when built using the 10nm technology -- it'd be about 46 square millimeters, since Intel claims 0.46x chip area scaling in going from its 14nm technology to its 10nm technology.

The area reduction is large enough that we'd produce so many more of Chip B than of Chip A that the effective manufacturing cost of Chip B would be lower than that of Chip A. 

How much cheaper would Chip B be to make? Well, using the 10nm technology, a whopping 1,325 units of Chip B could be produced per wafer. The effective cost per chip, then, would be just about $4.90 -- even with the higher wafer cost.
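Here's the same arithmetic for Chip B, again as a sketch built on the article's assumptions (the 0.46x scaling factor, the 30% wafer cost premium, and the 1,325-die estimate):

```python
# Cost per chip for Chip B on 10nm, using the article's assumptions.
wafer_cost_14nm = 5_000
wafer_cost_10nm = wafer_cost_14nm * 1.30    # 30% cost premium -> $6,500

chip_a_area = 100                 # sq. mm at 14nm
chip_b_area = chip_a_area * 0.46  # Intel's claimed 0.46x scaling -> 46 sq. mm

dies_per_wafer_b = 1_325          # article's per-wafer estimate, with edge losses

cost_per_chip_b = wafer_cost_10nm / dies_per_wafer_b
print(f"Chip B cost per chip: ${cost_per_chip_b:.2f}")  # -> $4.91, ~$4.90
```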

Understanding the trap

In this analysis, I made one big assumption: that all the chips produced on the 14nm and 10nm wafers worked. Put another way, I assumed the yield rate -- the percentage of chips produced that are usable -- was 100% on both technologies.

The cost advantages of a new manufacturing technology quickly evaporate if the new technology yields significantly worse than the older technology. In this case, Intel's 10nm technology needs to go up against an Intel 14nm technology that has been in mass production since 2014 and has been upgraded and tweaked in the three years since.
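To see how much yield headroom the density advantage buys, here's a back-of-the-envelope calculation using the same illustrative numbers. The 90% figure for mature 14nm yields is purely an assumption for the sake of the example; Intel's actual costs and yields aren't public:

```python
# Yield-adjusted cost per *good* chip: wafer_cost / (dies_per_wafer * yield).
wafer_cost_14nm, dies_a = 5_000, 600
wafer_cost_10nm, dies_b = 6_500, 1_325

def cost_per_good_chip(wafer_cost, dies_per_wafer, yield_rate):
    return wafer_cost / (dies_per_wafer * yield_rate)

# If mature 14nm yields, say, 90% of dies (an assumed figure)...
cost_a = cost_per_good_chip(wafer_cost_14nm, dies_a, 0.90)  # ~$9.26

# ...then 10nm only breaks even on cost once its yield reaches roughly:
breakeven_yield = wafer_cost_10nm / (dies_b * cost_a)
print(f"10nm break-even yield: {breakeven_yield:.0%}")      # -> ~53%
```

Under these assumptions, 10nm could yield markedly worse than mature 14nm and still break even on cost per good chip, but a new process struggling well below that level would make the older technology the cheaper choice.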

So now the trap that Intel has gotten itself into is clear: Given the maturity of 14nm and the relative immaturity of 10nm, it's more cost-effective for Intel to simply build more complex products on its 14nm technology than to transition its major product lines to 10nm technology.

If Intel's customers find roughly similar value in Intel's improved 14nm products compared to hypothetical, similarly positioned 10nm products, then it makes more financial sense for Intel to keep most of its shipment volumes on 14nm until 10nm can be produced at high enough yield rates to make the transition worthwhile.