Nvidia (NVDA -1.10%) has been the clear leader in artificial intelligence (AI) chips since the start of the AI boom. Its graphics processing units (GPUs) have become the go-to hardware for training large language models (LLMs) because they can handle the huge number of calculations that power AI.
The company's early bet on building out its CUDA software platform gave it a moat no one else has been able to crack, since most AI models have been written for CUDA over the years. That tight link between its hardware and software is why Nvidia owns more than 90% of the GPU market today.

That said, as more of AI moves from training to inference, Nvidia's dominance faces new challenges. Training only happens when a model is first built, while inference runs every time that model is used to answer a question or generate an output. Inference will likely become the much larger market over the next five years, and price and efficiency matter more than raw performance here. That shift opens the door for other chipmakers to grab some share.
While Nvidia is still very well positioned to be an AI winner, smaller AI leaders could outperform it in the coming years, in part simply because Nvidia's massive size makes continued hypergrowth harder to sustain. Let's look at two stocks that could outperform it by 2030.
Broadcom
Broadcom (AVGO -0.72%) is becoming one of the most important players in AI as big tech companies look for ways to cut costs on inference. The company designs application-specific integrated circuits, or ASICs, which are custom chips built for a single job. They are preprogrammed, so they lack the flexibility of GPUs, but they tend to be faster and more energy efficient for the task they are designed to do.
Broadcom built its reputation with ASICs by helping Alphabet design its tensor processing units (TPUs), which now power a large share of Google Cloud's AI workloads for both training and inference. That success brought in other major customers, including Meta Platforms and ByteDance; together with Alphabet, the company has said, they represent a combined $60 billion to $90 billion serviceable market opportunity in fiscal 2027.
Broadcom also recently revealed that a fourth large customer, widely believed to be OpenAI, has already placed a $10 billion order for next year. Reports suggest Apple is also working with Broadcom on its own AI chips. With Broadcom set to generate just over $63 billion in total revenue this fiscal year, ending Nov. 2, its custom chip opportunity is enormous relative to its current business.
If it can capture much of this opportunity, Broadcom's stock could outperform Nvidia's over the next several years.
AMD
Advanced Micro Devices (AMD 23.61%) has long been the No. 2 GPU maker behind Nvidia, but the move to inference is giving it an opening to grab more market share. With inference running every time an AI model is used, cost and efficiency become critical, and AMD is finding a niche here.
A big part of that has been progress with its ROCm software platform, which has improved enough to handle inference workloads effectively. While it still trails Nvidia's CUDA by a wide margin in training, ROCm 7 is good enough for many inference applications where price and power efficiency matter more than peak performance. One major AI player is already running a large portion of its inference traffic on AMD's GPUs, and seven of the top 10 AI operators now use some of its hardware.
AMD, together with Broadcom, is also a founding member of the UALink Consortium, which is developing an open interconnect standard as an alternative to Nvidia's proprietary NVLink that would allow chips from different vendors to work better together. If UALink becomes the standard, it would be a huge boost for a company like AMD.
Last quarter, AMD produced roughly $3 billion in AI data center revenue compared to more than $40 billion for Nvidia. Given AMD's much smaller revenue base, even modest share gains in inference could drive significant growth over the next five years and help its stock outperform.