Roughly 30 years ago, the advent and mainstream proliferation of the internet began changing the corporate landscape. Although it took years for this technology to mature and for businesses to fully harness it to maximize their sales and profits, the internet has had a profoundly positive impact on the growth trajectory of corporate America.
Investors have been waiting decades for Wall Street's next internet moment -- and artificial intelligence (AI) has answered the call.
The prospect of enabling software and systems with the tools to make split-second decisions without the need for human oversight is a potential game changer in most industries around the globe. This is a significant reason why PwC analysts have estimated that AI will contribute a staggering $15.7 trillion to global gross domestic product (GDP) by 2030.
Although investors can expect a long list of winners with this multitrillion-dollar opportunity, there's little doubt that graphics processing unit (GPU) maker Nvidia (NVDA) has been the leading beneficiary of the rise of AI. It's grown from a $360 billion tech company at the start of 2023 to Wall Street's largest publicly traded company -- and the first to (briefly) reach the $5 trillion plateau.
Image source: Nvidia.
Last week, Nvidia's fiscal third-quarter operating results (its fiscal year ends in late January) highlighted the benefits of its first-mover advantage. However, one of the company's biggest flexes may have also exposed a serious future growth weakness.
It's business as usual for the most important company in the tech sector
Nvidia blowing past Wall Street's consensus sales and profit expectations ranks right up there with death and taxes among life's certainties. The company delivered $57 billion in sales, representing 62% growth from the prior-year period, with generally accepted accounting principles (GAAP) net income of $31.9 billion. The latter is up 21% from the sequential quarter and 65% from the comparable quarter in the previous year.
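For readers who want to sanity-check those growth rates, the prior-period figures they imply can be backed out with a few lines of arithmetic. This is just a sketch using the rounded numbers quoted above, so the results are approximate:

```python
# Back out the implied prior-period figures from the growth rates
# quoted in the article (rounded inputs, so results are approximate).

q3_revenue = 57.0          # $ billions, fiscal Q3 sales
revenue_growth_yoy = 0.62  # 62% growth year over year

q3_net_income = 31.9       # $ billions, GAAP net income
ni_growth_qoq = 0.21       # 21% growth vs. the sequential quarter
ni_growth_yoy = 0.65       # 65% growth vs. the year-ago quarter

prior_year_revenue = q3_revenue / (1 + revenue_growth_yoy)
prior_quarter_ni = q3_net_income / (1 + ni_growth_qoq)
prior_year_ni = q3_net_income / (1 + ni_growth_yoy)

print(f"Implied year-ago revenue:              ~${prior_year_revenue:.1f}B")  # ~$35.2B
print(f"Implied prior-quarter GAAP net income: ~${prior_quarter_ni:.1f}B")    # ~$26.4B
print(f"Implied year-ago GAAP net income:      ~${prior_year_ni:.1f}B")       # ~$19.3B
```

In other words, Nvidia added roughly $22 billion of quarterly revenue in a single year -- the scale behind the percentage figures.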
These figures shouldn't come as a surprise to those who've been tracking Nvidia's innovations and dealmaking. Its multiple generations of GPUs, including Hopper, Blackwell, and Blackwell Ultra, are the undisputed preferred option in AI-accelerated data centers. Said CEO Jensen Huang:
Blackwell sales are off the charts, and cloud GPUs are sold out. Compute demand keeps accelerating across training and inference -- each growing exponentially.
Furthermore, these chips have proven superior to all external competitors in compute capabilities for enterprise data centers. This first-mover advantage, coupled with ongoing AI-GPU scarcity, has translated into phenomenal pricing power and a GAAP gross margin that's been lifted well above 70%.

Nvidia's CUDA software platform has also been pivotal to the success of its hardware. CUDA is effectively the toolkit developers use to maximize the compute capabilities of their Nvidia GPUs when building and training large language models, running high-frequency trading algorithms, or overseeing scientific simulations, among other tasks. This software has anchored buyers to the Nvidia brand and kept them within its product and service ecosystem.
Artificial intelligence is exhibiting all the hallmarks of a truly game-changing technology. But this doesn't mean the parabolic ascent of Nvidia's stock, or its jaw-dropping sales growth, is sustainable.
Nvidia's big flex points to a future shortcoming
Similar to Nvidia's earnings press release, its quarterly conference call with analysts was packed with optimism and data points indicative of continued double-digit sales growth. However, one flex, courtesy of Chief Financial Officer (CFO) Colette Kress, may have unearthed a massive risk to her company's long-term growth prospects.
While delivering remarks prior to the executive team fielding questions from analysts, Kress said the following:
Most accelerators without CUDA and Nvidia's time-tested and versatile architecture became obsolete within a few years as model technologies evolve. Thanks to CUDA, the A100 GPUs we shipped six years ago are still running at full utilization today, powered by [a] vastly improved software stack.
On one hand, this really emphasizes the high-margin value CUDA brings to the table. While the focus seems to be on the compute potential of Nvidia's hardware and Jensen Huang's aggressive innovation timeline that brings a new AI-GPU to market annually, CUDA might be the unsung hero for Nvidia.
Image source: Getty Images.
On the other hand, Kress's statement reveals a potentially significant issue for Nvidia. If the company's Ampere (A100) chips from six years ago can be supported by software improvements via CUDA, what incentive do existing clients have to upgrade their AI-data center hardware after five or six years?
Huang's amped-up innovation timeline, which is expected to bring the Vera Rubin and Vera Rubin Ultra chips to market by the latter half of 2026 and 2027, respectively, counts on AI hardware demand to remain robust. But if Nvidia's prior-generation GPUs offer utility well beyond their initial expectations, it would make sense for most businesses to delay their upgrade cycles. This would prove disastrous for Nvidia's pricing power on advanced AI chips and weaken its GAAP gross margin.
Additionally, the price of prior-generation GPUs continues to deteriorate, even if utilization remains robust. Kynikos Associates founder and noted short-seller Jim Chanos pointed out in a post on X (formerly Twitter) last week that the Hopper (H100) GPU Rental Index has declined by 30% in the 15 months since its inception. The Hopper was the next-generation chip introduced after Ampere.
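To put that 30% decline over 15 months on an annual footing, it can be converted to an annualized rate. This is a simple sketch that assumes a constant compound rate of decay, which real rental prices won't follow exactly:

```python
# Annualize the reported 30% decline in the Hopper (H100) GPU Rental
# Index over 15 months, assuming a constant compound rate of decay.

total_decline = 0.30  # 30% total drop
months = 15

monthly_factor = (1 - total_decline) ** (1 / months)  # per-month retention
annual_decline = 1 - monthly_factor ** 12             # implied 12-month drop

print(f"Implied annualized rental-price decline: ~{annual_decline:.0%}")  # ~25%
```

That works out to roughly a 25% drop in rental pricing per year, a steep depreciation curve for hardware that remains fully utilized.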
If the price of Hopper and Ampere continues to decline following the release of next-generation AI-GPUs from Nvidia, it'll offer even more incentive for businesses to hang on to their existing hardware. In other words, the effectiveness of CUDA in keeping Ampere relevant could cost Nvidia significant growth in the years to come if clients shy away from spending potentially billions of dollars to upgrade their data center infrastructure.
This may be a rare instance of an Nvidia flex completely backfiring on the company -- but we won't know for sure until a few years from now, when GPU upgrade cycles should commence.