Perhaps the hottest question in technology investing circles today is whether the eye-popping growth and margins Nvidia (NVDA 1.63%) has posted over the past two quarters are sustainable. If artificial-intelligence (AI) accelerator sales grow at a 50% annualized rate for the next five years, as some believe, Nvidia's stock may actually be cheap. But if competitive forces begin to eat away at its AI dominance, Nvidia's stock may be as expensive as its P/E ratio of 100 makes it look.

At a recent industry conference, Lisa Su, the CEO of Nvidia competitor Advanced Micro Devices (AMD -4.00%), said of Nvidia's AI moat, "I'm not a believer in moats when the market is moving as fast as this."

That statement implies Nvidia's current lead in the fast-moving AI space isn't assured, even though the company has a multiyear head start in AI accelerator hardware and software development.

But beyond rhetoric, here is how leading tech companies, from AMD and Intel to the cloud giants, are looking to breach Nvidia's castle.

Nvidia's CUDA moat: Real or perceived?

The thinking among many investors today is that Nvidia has a big lead in AI, not only from its hardware innovations but also, importantly, from its CUDA software stack. CUDA was developed to let graphics chips be programmed for general-purpose parallel processing, which is what enables artificial-intelligence training and inference.

Software moats can be powerful; look no further than Microsoft's (MSFT -0.39%) Office suite, which includes PowerPoint, Excel, and Word. Once it became the standard on which most business got done, everyone began to learn Office. And once a critical mass of users was learning Office, it became an ever-taller order for an upstart to come in with a competitive alternative. This is called a network effect.

Has Nvidia achieved an impenetrable network effect with CUDA? Lisa Su doesn't seem to think so.

There are a couple of reasons Nvidia's CUDA might be more vulnerable to disruption than Microsoft Office. First, Nvidia's GPUs are prohibitively expensive, currently going for $30,000 or more per chip. Given that AI systems require thousands of these chips, the large cloud platforms and other AI customers have a huge incentive to seek out competitive alternatives. Microsoft Office, by contrast, isn't that expensive a product in the overall scheme of an enterprise's budget.

Moreover, AMD and Intel (INTC -1.02%), as well as tech giants such as Meta Platforms (META -1.13%), Alphabet (GOOG 0.07%) (GOOGL 0.14%), and Microsoft, are all contributing to open-source alternatives. These are massive companies with deep developer resources that should be able to produce a viable multichip platform alternative for the age of AI.

Finally, we are still in the relatively early innings of the AI boom, which started in earnest just a year ago with the public launch of OpenAI's ChatGPT. So if these competitors move fast enough, a formidable open-source platform could catch on before Nvidia's moat hardens further.


Can big tech develop an alternative to CUDA? Image source: Getty Images.

ROCm and SYCL

At their recent AI and data center chip presentations, Intel and AMD each presented their CUDA alternatives. Unsurprisingly, each touted the benefits of an open platform, in which its own in-house software can be ported to different GPUs while also integrating with today's leading open-source AI software.

Examples of leading open-source platforms are PyTorch, which receives contributions from all the major tech giants but started at Meta; TensorFlow, which started at Alphabet; DeepSpeed, an AI framework from Microsoft; and the libraries of AI software start-up Hugging Face.

But the more interesting attribute of AMD's and Intel's software stacks is that each has a portability feature aimed at letting developers take code written in CUDA and port it to their respective platforms, for use on other hardware, with as little recoding as possible.

AMD's software stack is called ROCm, and it will serve AMD's Instinct line of AI chips, such as the MI300 now hitting the market. ROCm is in its fifth generation, which AMD has hailed as "mostly open," with optimizations for both PyTorch and Hugging Face. Importantly, it has a porting feature that allows developers to bring over code written for other GPUs. Without naming names, that likely means Nvidia and CUDA.

Similarly, Intel has a slew of new programming software, including significant contributions to PyTorch and other frameworks. Intel is also championing an open-source AI programming standard called SYCL, first developed by the Khronos Group. SYCL is a higher-level, open-source C++ programming model that lets developers write code once and run it on any supported accelerator.

Intel also released a tool last year called SYCLomatic, which ports CUDA code into SYCL. The most recent results showed it could migrate roughly 90% of CUDA code automatically, with only minor manual tweaks needed to make the result run across different accelerators.
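To make the porting idea concrete, here is a minimal sketch of what such a translation looks like: a classic CUDA vector-add kernel (shown in comments) rewritten by hand in SYCL 2020. This is our own illustrative example, not taken from Intel's SYCLomatic materials, and it requires a SYCL toolchain (such as Intel's oneAPI DPC++ compiler) to build.

```cpp
// CUDA original -- runs only on Nvidia GPUs:
//
//   __global__ void vec_add(const float* a, const float* b, float* c, int n) {
//       int i = blockIdx.x * blockDim.x + threadIdx.x;
//       if (i < n) c[i] = a[i] + b[i];
//   }
//   // launched as: vec_add<<<grid, block>>>(a, b, c, n);

// SYCL 2020 equivalent -- the same computation, written once:
#include <sycl/sycl.hpp>
#include <vector>

int main() {
    const size_t n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    // The queue targets whatever device the runtime selects:
    // an Intel GPU, an AMD GPU, an Nvidia GPU, or the host CPU.
    sycl::queue q;
    {
        sycl::buffer<float> ba(a.data(), sycl::range<1>(n));
        sycl::buffer<float> bb(b.data(), sycl::range<1>(n));
        sycl::buffer<float> bc(c.data(), sycl::range<1>(n));
        q.submit([&](sycl::handler& h) {
            sycl::accessor A(ba, h, sycl::read_only);
            sycl::accessor B(bb, h, sycl::read_only);
            sycl::accessor C(bc, h, sycl::write_only);
            // parallel_for replaces the CUDA launch syntax; the 1D index
            // replaces the blockIdx/threadIdx arithmetic.
            h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
                C[i] = A[i] + B[i];
            });
        });
    } // buffers go out of scope here, syncing results back to the vectors
    return 0;
}
```

The structural mapping (kernel body, index computation, launch call) is mechanical, which is why a tool like SYCLomatic can automate most of it; the remaining manual work tends to involve vendor-specific libraries and performance tuning.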

If there's no moat, it will come down to a tougher hardware fight

Clearly, Nvidia has a big lead in AI chips. But AMD just unveiled its MI300, which also has significant capability thanks to its "chiplet" architecture. Meanwhile, Intel's Gaudi line of AI accelerators is at least good enough to be used by one high-profile generative AI start-up, Stability AI. And both competitors will surely invest heavily in the AI accelerator market, given its hypergrowth.

The case for Nvidia to retain its lead and run away with the AI market probably rests on the network effects of CUDA, as hardware superiority can be fleeting. Intel knows this all too well: after dominating CPUs for many years, it lost its lead in the most advanced chips around five years ago.

The AI market may well be big enough for all three companies to thrive. But investors -- especially Nvidia investors -- will have to keep constant watch on the AI software competition, as it could mean the difference between the dominant growth and 50%-plus net margins Nvidia posted last quarter and the more industry-standard 20%-30% margins historically seen in leading-edge processors.