Nvidia (NVDA) shareholders might have had a confusing week last week. On the one hand, the company reported second-quarter earnings that smashed analyst expectations, along with forward guidance for 18.5% quarter-over-quarter growth -- roughly 74% growth on a simple annualized basis.
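For readers checking the math, the 74% figure is the simple (non-compounded) annualization of the guided 18.5% sequential growth rate; compounding the same rate over four quarters would come out meaningfully higher. A quick sketch:

```python
# Nvidia's guided sequential (quarter-over-quarter) growth rate
qoq = 0.185

# Simple annualization (the article's figure): four quarters of growth added up
simple_annualized = qoq * 4
print(f"Simple annualized:   {simple_annualized:.0%}")    # 74%

# Compound annualization: the same rate repeated for four straight quarters
compound_annualized = (1 + qoq) ** 4 - 1
print(f"Compound annualized: {compound_annualized:.0%}")  # 97%
```

Neither number is a forecast, of course -- it just shows how fast the current pace would be if sustained for a year.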

Yet after an initial after-hours spike, investors actually sold the stock off following the report.

With Nvidia's stock running so much higher this year -- shares have more than tripled, after all -- it appears the market may already have priced in a lot of the good news.

But has it, really? If the growth of the accelerator market and Nvidia's moat are the two elements that determine future value, Nvidia may still have upside left.

What Wall Street's pricing in

Nvidia's revenue and margins have skyrocketed, with revenue up 101% on the back of 171% data-center growth last quarter. Incredibly, that jump occurred with only a modest 10% increase in operating expenses. As a result, net margins expanded from just 19.2% in the year-ago quarter to a whopping 50% last quarter.
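The operating leverage at work here -- revenue roughly doubling while operating expenses barely move -- can be sketched with deliberately simplified, hypothetical indexed numbers. This ignores cost of goods sold, taxes, and other items, so the output will not match Nvidia's reported figures; it only illustrates why margins expand so sharply in this situation:

```python
# Hypothetical, simplified income statement -- NOT Nvidia's actual figures.
# Revenue is indexed to 100; operating expenses assumed at 40% of revenue.
revenue_prior, opex_prior = 100.0, 40.0

revenue_now = revenue_prior * 2.01   # revenue up 101%, per the article
opex_now = opex_prior * 1.10         # operating expenses up only 10%

margin_prior = (revenue_prior - opex_prior) / revenue_prior
margin_now = (revenue_now - opex_now) / revenue_now
print(f"Margin before: {margin_prior:.0%}, after: {margin_now:.0%}")  # 60% -> 78%
```

When sales double and costs stay nearly flat, almost all of the incremental revenue falls straight to the bottom line -- the same dynamic behind Nvidia's jump from a 19.2% to a 50% net margin.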

A 50% net margin is usually reached only by the very top echelon of competitively advantaged, widely scaled companies. Unsurprisingly, there is some debate as to whether Nvidia can maintain those margins in the competitive semiconductor business.

Currently, analysts' estimates vary widely when looking beyond this year. Earnings estimates for fiscal 2024 (which ends in January) range from $7.40 to $11.49 per share. And estimates for fiscal 2025, which mostly corresponds to calendar 2024, vary even more, ranging from $10.06 to $23.96.

If Nvidia lands at the high end of next year's range, then it would be priced around 20 times next year's earnings -- not that far away from a market multiple. That would seem a pretty reasonable price to pay for the GPU leader in the early days of the AI revolution.
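The sensitivity of the multiple to those estimates is easy to see. The share price below is purely illustrative (the article never states one); the EPS range is the fiscal 2025 analyst range quoted above:

```python
# Assumed share price, for illustration only -- substitute the current quote.
price = 485.0

# Analysts' fiscal 2025 EPS estimate range cited in the article
eps_low, eps_high = 10.06, 23.96

print(f"P/E at low-end EPS:  {price / eps_low:.1f}x")   # ~48x
print(f"P/E at high-end EPS: {price / eps_high:.1f}x")  # ~20x
```

Depending on which analyst proves right, the same price implies anywhere from a market-like multiple to a steep premium.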

Of course, reaching that lofty goal depends on a few things.

Image source: Getty Images.

Why Jensen Huang thinks growth can be sustained

There is considerable debate as to how big the artificial intelligence build-out will get. Some analysts believe generative AI is so revolutionary that the build-out and investment momentum will go on for years. After all, if enterprises need generative AI as table stakes to compete in their respective industries, everyone will have to invest -- and likely pay Nvidia handsomely to do it.

Yet others believe there's a limit to AI's usefulness and that the current spending surge will burn out sooner than the bulls expect.

However, Nvidia's jump is about more than just AI, as CEO Jensen Huang explained on the recent conference call with analysts. While AI is a "killer app" of accelerated computing, Huang sees future growth coming from GPUs largely displacing CPUs across all kinds of general computing applications as well:

[W]hat kicked it into turbocharge is generative AI. But accelerated computing could be used for all kinds of different applications that's already in the data center. And by using it, you offload the CPUs, you save a ton of money -- an order of magnitude in cost and an order of magnitude in energy -- and the throughput is higher. And that's what the industry is really responding to.

If GPU accelerators eventually replace a lot of the CPU-based servers for general computing, then Nvidia's data-center opportunity is far larger than just AI. 

And how big is Nvidia's moat?

Of course, with GPU-based computing taking off now in earnest, rivals such as Advanced Micro Devices and Intel will be coming after Nvidia's pie, along with cloud companies' in-house designs and probably a few start-ups too.

But Huang also outlined why it will be difficult for would-be competitors to displace Nvidia chips due to three elements working in concert.

First, Nvidia's GPUs have been engineered for maximum flexibility over 20 years, with the ability to handle everything from simple computing tasks to the most complex AI workflows. That flexibility comes from Nvidia's hardware architecture and CUDA software stack working in concert.

... from multiple instances per GPU to multiple GPUs, multiple nodes, to entire data center scale. So this runtime called NVIDIA AI Enterprise has something like 4,500 software packages, software libraries, and has something like 10,000 dependencies among each other. And that runtime is, as I mentioned, continuously updated and optimized for our install base, for our stack. And that's just one example of what it would take to get accelerated computing to work -- the number of code combinations and type of application combinations is really quite insane. And that's taken us two decades to get here.

So while cloud players' in-house chips or other rivals may be able to optimize for certain computing functions, Huang believes it will be difficult for them to replicate the sheer flexibility of Nvidia's chips, built on two decades of hardware and software engineering evolving together.

Second, Nvidia dominates the GPU space right now, giving it economies of scale and a beneficial network effect with developers. It's pretty simple: software developers want to build on the platform with the broadest reach, and today that means building for Nvidia's architecture. And the more developers build for Nvidia chips, the more Nvidia becomes a standard, attracting even more developers, and so on.

Finally, Huang points out that Nvidia is innovating at a faster pace today, which can make it even harder for rivals to catch up. The company is not only coming out with a new hardware architecture every two years but also releasing new products within each architecture every six months.

Of note, Nvidia did announce a sizable step-up in operating expenses to $2.95 billion next quarter -- around a 9% sequential increase, or roughly 37% on a simple annualized basis, from the $2.7 billion spent last quarter.
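That guided step-up works out close to the article's rounded figures; the arithmetic, using the two opex numbers above:

```python
# Guided vs. last-quarter operating expenses (in billions), per the article
opex_next, opex_last = 2.95, 2.70

qoq_increase = opex_next / opex_last - 1
print(f"Sequential increase: {qoq_increase:.1%}")      # 9.3%

# Simple (non-compounded) annualization of that quarterly pace
print(f"Simple annualized:   {qoq_increase * 4:.0%}")  # 37%
```

Even that faster spending pace remains far below the company's guided revenue growth, which is why margins can keep expanding.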

Nvidia now has enormous cash flows coming in, which should allow it to fund research and development at a much higher rate than peers while still expanding margins -- making its innovation engine difficult to catch.

Odds are in Nvidia's favor

There is considerable debate around Nvidia, and shares will no doubt experience a pullback at one time or another. But as it stands now, the move from CPUs to GPUs for more than just AI, along with the strengthening advantages of Nvidia's ecosystem, keeps me on the bullish side of the Nvidia argument for now.