Later this year, graphics giant NVIDIA (NVDA 3.72%) is expected to introduce the first graphics processors based on a new architecture known as Turing. NVIDIA's currently available graphics processors are based on an architecture called Pascal, which first launched in May 2016.

NVIDIA's high-end Pascal products (GeForce GTX 1060, GTX 1070/1070 Ti, GTX 1080/1080 Ti, and Titan Xp) are manufactured using Taiwan Semiconductor Manufacturing Company's (TSM 2.83%) 16-nanometer chip technology. The low-end Pascal products (GeForce GTX 1050 Ti and lower) are built using Samsung's 14-nanometer technology.

An NVIDIA Volta graphics processor.

Image source: NVIDIA.

A question that's worth asking, then, is this: What manufacturing technology will NVIDIA use for its Turing line of graphics processors?

Let's go over the two possibilities.

Option No. 1: 12FFN

Last year, NVIDIA announced a new graphics architecture called Volta. NVIDIA has yet to bring Volta to the consumer market (and, frankly, isn't likely to at this point), instead reserving it for its data center and high-end professional visualization products.

The Volta products are built using TSMC's 12FFN manufacturing technology. 12FF is an enhanced version of TSMC's 16-nanometer, or 16FF, technology, and 12FFN is a variant of 12FF customized specifically for NVIDIA.

Given the maturity, performance, and cost-effectiveness of TSMC's 16-nanometer/12-nanometer technology, I wouldn't be surprised to see NVIDIA use it to build the Turing line of processors. 12FFN wouldn't deliver a generational leap over 16FF, but it'd be a nice improvement that would amplify any of the design/architecture advances that NVIDIA brings to the table with Turing.

Option No. 2: 10FF

Another option that NVIDIA could go with is TSMC's 10-nanometer, or 10FF, technology. 10FF offers a substantial improvement in chip area compared to 16FF (a chip built in 10FF occupies about half the area of a comparable chip built in 16FF). This would allow NVIDIA to cram significantly more graphics cores into a given area than it could using 12FFN, leading to potentially much greater performance.
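To make the area-scaling argument concrete, here's a rough back-of-the-envelope sketch. It assumes the roughly 2x density figure implied above; the die size and core count used are purely illustrative, not actual specifications of any Turing product.

```python
# Back-of-the-envelope sketch of the 10FF area-scaling argument.
# Assumption (from the article): a chip built in 10FF occupies about
# half the area of the same design in 16FF, i.e. roughly 2x density.

die_area_mm2 = 450.0        # hypothetical die size, held constant across nodes
cores_at_16ff = 3840        # hypothetical core count that fits at 16FF density
density_gain_10ff = 2.0     # ~2x density for 10FF vs. 16FF (article's claim)

# Holding die area fixed, core count scales roughly with transistor density.
cores_at_10ff = int(cores_at_16ff * density_gain_10ff)
print(cores_at_10ff)  # 7680
```

In practice the gain wouldn't be a clean doubling (not everything on a die shrinks equally, and power and yield constraints intervene), but it illustrates why 10FF would let NVIDIA fit far more cores into the same silicon budget than 12FFN.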

Moreover, 10FF should be reasonably mature at this point, given that TSMC has been using it to manufacture Apple's A11 Bionic and A10X Fusion processors for about a year now.

However, I think it's far less likely that NVIDIA will use 10FF for Turing than 12FFN, for a simple reason: TSMC's commentary on its January earnings call.

TSMC said that 10-nanometer revenue made up 10% of the company's overall revenue during 2017 and that it expects year-over-year growth in 10-nanometer revenue during 2018 "driven by application processor, cellular baseband and ASICs CPU."

TSMC didn't mention graphics processors at all and, quite frankly, if NVIDIA were planning to introduce a new graphics architecture on 10FF, TSMC would probably be expecting a sizable boost in revenue from such orders.

Moreover, on that same earnings call, TSMC said that its "key technologies for high-performance computing are 16- and 12-nanometer and 7-nanometer." Since TSMC counts graphics processors among its high-performance computing applications, the options for Turing likely come down to 16-nanometer, 12-nanometer, and 7-nanometer technology.

As 16-nanometer technology is old news at this point and 7-nanometer tech is probably a ways off from being able to cost-effectively mass-produce large consumer graphics processors, some derivative of 12-nanometer -- 12FFN, in this case -- seems to be the most likely bet.