The demand for chips that power artificial intelligence (AI) applications is expected to grow rapidly in the long run, with Precedence Research estimating that the AI chip market could clock annual growth of almost 30% over the next decade.

This will open a massive opportunity for the likes of Nvidia (NVDA 3.71%) and Advanced Micro Devices (AMD 1.33%), as both companies are in the business of making chips that form the backbone of AI infrastructure. Graphics processing units (GPUs) -- which both AMD and Nvidia sell -- are playing a critical role in the training and inference of AI models, thanks to their massive parallel computing power.

ChatGPT, one of the most popular generative AI apps out there, was reportedly trained with the help of 10,000 Nvidia GPUs, according to Timothy Arcuri of UBS. Not surprisingly, there are concerns that a GPU supply shortage could materialize, as generative AI applications are expected to create the need for hundreds of thousands of GPUs. This helps explain why the global GPU market is expected to generate a massive $451 billion in revenue by 2030.

So which one of these two chipmakers is in a better position to take advantage of this lucrative opportunity? Let's find out.

Nvidia is racing ahead in AI

Nvidia's GPUs already power generative AI applications, a market that's still in its early stages of growth. Grand View Research forecasts that the generative AI market will clock annual growth of nearly 35% through 2030. So Nvidia's early-mover advantage in this space means it is quickly building a sizable customer base for the graphics cards that power AI applications.

OpenAI is using the company's A100 GPUs to power ChatGPT, and it will now deploy the latest-generation H100 Hopper graphics cards to train Microsoft's Azure supercomputer for AI research. Meta Platforms also deployed the H100 GPUs in its AI supercomputer, known as Grand Teton, to power both the training and inference of deep-learning models.

Meanwhile, other generative AI platforms, such as Stability AI, Twelve Labs, and Anlatan, are tapping the H100 GPUs to train different kinds of applications. In addition, Nvidia's enterprise-class DGX H100 systems -- which combine multiple H100 GPUs along with central processing units (CPUs) and fast network connectivity to help enterprises tackle large AI workloads -- are also gaining traction across the globe.

Japanese digital advertising and internet services provider CyberAgent is using the DGX H100 systems to create AI-generated ads, and Johns Hopkins University's Applied Physics Laboratory is tapping them to train large language models.

All this indicates that Nvidia is on its way to making the most of the multibillion-dollar AI GPU opportunity that could give its revenue a significant boost.

Nvidia CEO Jensen Huang remarked in March that the company saw an acceleration in demand for its H100 systems. Not surprisingly, analyst Atif Malik of Citigroup estimates that ChatGPT alone could bring in as much as $12 billion in revenue for Nvidia over the next year. That figure could go higher, given that multiple companies are tapping Nvidia's GPUs.

AMD has a lot of work to do

Artificial intelligence was a key theme on AMD's latest earnings conference call. The term "AI" was mentioned 46 times on the call. However, a closer look indicates that AMD is behind Nvidia in this market.

AMD is still formulating its AI strategy and has created a separate division to "execute our broad AI strategy and significantly accelerate this key part of our business," as CEO Lisa Su pointed out on the earnings call. The company also has a lot of catching up to do on the product development front. Its MI300 GPUs, which are meant for the training and inference of large language models, are yet to hit the market.

AMD points out that "customer interest has increased significantly for our next-generation Instinct MI300 GPUs," but investors should note that these chips are yet to be launched commercially. Of course, the MI300 GPUs could unlock a big opportunity for the chipmaker, but they could run into heavy competition from Nvidia.

The MI300 chips are expected to combine CPUs and GPUs on a single platform so that they can train large language models quickly. But Nvidia is set to introduce its own integrated platform, the Grace Hopper Superchip, which it says was designed from the ground up to handle large-scale AI applications by combining a CPU, a GPU, and high-bandwidth memory.

A report from Bloomberg suggests that AMD and Microsoft could be working together to develop AI processors as an alternative to Nvidia's chips. But again, this further indicates that AMD is trying to catch up with Nvidia. In all, it remains to be seen how AMD's foray into this space turns out and whether it can challenge the dominance of Nvidia, which reportedly controls over 90% of the data center GPU market.

The verdict

It's evident that Nvidia is the leading player in the AI GPU market, and AMD has catching up to do. Wall Street anticipates Nvidia will earn billions of dollars from selling GPUs for generative AI applications over the next year, and that could help the company offset the weakness in the personal computer (PC) market, which has also been AMD's Achilles' heel of late.

Nvidia gets a big chunk of its revenue from the data center business, which accounted for nearly 60% of its top line in the fourth quarter of fiscal 2023. AMD, on the other hand, gets just under a quarter of its revenue from the data center segment. So AI chips are likely to move the needle in a bigger way for Nvidia compared to AMD, especially considering the former's strong position in this space. That's why investors looking for an AI stock would do well to pick Nvidia over AMD.