Nvidia (NVDA 3.96%) exercises a tremendous grip over the global artificial intelligence (AI) semiconductor market, a status it has attained over the years by keeping competitors at bay. The company's technological advantage has kept it the go-to provider of AI chips for companies and governments around the globe.
The company reportedly controls 85% to 90% of the global AI chip market. Of course, new competition is emerging in the form of custom AI processors designed by Broadcom, while Advanced Micro Devices is also taking steps to make a bigger dent in this market. However, those efforts may not ultimately translate into notable progress for Nvidia's rivals, for one very simple reason.
Nvidia may keep its lead thanks to this move
One of the biggest reasons why Nvidia has been so successful in AI chips is its control over the supply chain. Nvidia is a fabless chipmaker. That means it designs and sells chips, but it leaves chip manufacturing to a third party. The third-party manufacturer in Nvidia's case is Taiwan Semiconductor Manufacturing (TSM 3.69%), popularly known as TSMC.

TSMC is the world's largest foundry, with an estimated 70% share of the market. It commands that share because it excels at fabricating chips on advanced process nodes, which pack in more computing power while keeping energy consumption low. Nvidia uses TSMC's 4-nanometer (nm) process nodes to manufacture its popular Hopper and Blackwell series of data center graphics processing units (GPUs).
The chip designer is expected to move to TSMC's 3nm process node to manufacture its Rubin series of AI GPUs next year. Of course, rivals AMD and Broadcom have reportedly been using TSMC's 3nm node for their own chips, but even so, they haven't been able to scale their data center businesses to Nvidia-like levels.
For some perspective, Nvidia's data center revenue stood at $41 billion in the last reported quarter, jumping by 56% from the year-ago period. Broadcom, on the other hand, reported $5.2 billion in AI revenue in the previous fiscal quarter, while AMD's data center revenue was even lower at $3.2 billion. All these companies are fabless chipmakers, and all of them go to TSMC to get their chips made.
However, Nvidia has cornered the lion's share of TSMC's manufacturing capacity. Earlier this year, reports suggested that Nvidia had secured a whopping 70% of TSMC's advanced chipmaking capacity for itself. That explains why there has been such a massive gap between its AI revenue and that of its rivals.
Meanwhile, smartphone giant Apple has reportedly been buying up a major share of TSMC's 3nm output for its iPhones. The iPhone maker is also said to have cornered half of TSMC's 2nm production capacity for next year's iPhone lineup. That probably explains why Nvidia is likely to jump from the 3nm process node to the A16 process node in 2028, when it releases its Feynman series of GPUs.
The A16 is a 1.6nm-class process node, and it should deliver a big leap in computing power and power efficiency. TSMC says the A16 technology will be 8% to 10% faster and consume 15% to 20% less power than the 2nm process node. The 2nm node, in turn, promised a 10% to 15% improvement in performance along with a 25% to 30% drop in power consumption over the 3nm node.
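Since those figures are quoted node over node, one rough way to gauge what a jump straight from 3nm to A16 could mean is to compound them. The quick Python sketch below is illustrative only, assuming the stated gains simply multiply and ignoring architecture and design changes; by that math, A16 would offer roughly 19% to 26% more performance and 36% to 44% lower power draw than the 3nm node.

```python
# Back-of-the-envelope compounding of TSMC's stated node-to-node gains.
# Illustrative only: it assumes the 3nm -> 2nm and 2nm -> A16 figures simply
# multiply, and it ignores architecture, clock, and design changes.

perf_3nm_to_2nm = (0.10, 0.15)   # 2nm is 10%-15% faster than 3nm
perf_2nm_to_a16 = (0.08, 0.10)   # A16 is 8%-10% faster than 2nm

power_3nm_to_2nm = (0.25, 0.30)  # 2nm uses 25%-30% less power than 3nm
power_2nm_to_a16 = (0.15, 0.20)  # A16 uses 15%-20% less power than 2nm

# Performance gains multiply: (1 + g1) * (1 + g2) - 1
perf_low = (1 + perf_3nm_to_2nm[0]) * (1 + perf_2nm_to_a16[0]) - 1
perf_high = (1 + perf_3nm_to_2nm[1]) * (1 + perf_2nm_to_a16[1]) - 1

# Power reductions compound on the remaining draw: 1 - (1 - r1) * (1 - r2)
power_low = 1 - (1 - power_3nm_to_2nm[0]) * (1 - power_2nm_to_a16[0])
power_high = 1 - (1 - power_3nm_to_2nm[1]) * (1 - power_2nm_to_a16[1])

print(f"A16 vs. 3nm performance: roughly +{perf_low:.0%} to +{perf_high:.0%}")
print(f"A16 vs. 3nm power draw:  roughly -{power_low:.0%} to -{power_high:.0%}")
```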
Nvidia, therefore, could deliver a much more powerful AI processor in 2028, when the Feynman architecture is released. What's more, Nvidia is said to be the only company testing the A16 technology with TSMC; Apple, surprisingly, hasn't entered the fray. More importantly, since there's no news yet of Nvidia's AI chip rivals looking to adopt the A16 process node, Nvidia could corner a substantial portion of that node's capacity for itself.
More powerful chips could supercharge the company's growth
It is easy to see why Nvidia has moved quickly to secure its position as TSMC's lead customer for the A16 process. The company forecasts $3 trillion to $4 trillion in spending on AI infrastructure by 2030, and it estimates that global data center capex is likely to increase at an annual rate of 40% over the next five years.
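For context, 40% annual growth compounds to roughly a 5.4x increase over five years. The short Python sketch below runs that arithmetic and works backward from the $3 trillion to $4 trillion forecast to the annual spending base it implies; it's purely illustrative and assumes smooth compounding from a single annual figure.

```python
# Rough consistency check on the two forecasts above: 40% annual growth in
# global data center capex over five years, versus $3 trillion to $4 trillion
# in AI infrastructure spending by 2030. Illustrative arithmetic only.

growth_rate = 0.40
years = 5

multiple = (1 + growth_rate) ** years  # ~5.4x over five years
print(f"40% compounded over {years} years: {multiple:.1f}x")

# Working backward, a $3T-$4T total in 2030 would imply annual spending of
# roughly $560 billion to $745 billion today.
for target_trillions in (3, 4):
    base_billions = target_trillions * 1_000 / multiple
    print(f"${target_trillions}T by 2030 implies a base of about ${base_billions:.0f}B today")
```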
That's why it makes sense for Nvidia to bolster its presence in the AI chip market before rivals make a bigger dent. If Nvidia can deliver more powerful chips that also reduce the total cost of operating data centers thanks to their lower power consumption, there is a solid chance its customers will keep pouring money into its offerings.
So it won't be surprising to see Nvidia retain its AI chip dominance over the next five years, and that is likely to translate into more upside for investors.