Much of the artificial intelligence enthusiasm has settled on Nvidia (NVDA -2.88%) stock, and for good reason. Nvidia is the outright leader in AI GPUs needed for the massive parallel processing of AI models.
However, many large semiconductor companies and cloud giants are now gunning for Nvidia's lead. After Nvidia's massive 213% rise this year, investors are now debating whether the chip giant can maintain its first-mover advantage and sky-high margins.
But whether or not Nvidia can fend off the competition, the artificial intelligence trend seems here to stay. And whichever chip giant comes out on top, the following three stocks will benefit... probably quite handsomely.
Applied Materials
Every semiconductor is produced through multiple manufacturing steps, each executed by specialized semiconductor equipment. The semiconductor equipment industry is technologically complex and fairly consolidated: for each step of the process, there are really only one to three competitive vendors, making it an attractive industry to invest in.
Applied Materials (AMAT -0.95%) has the largest and broadest portfolio in semiconductor equipment, participating in a number of these steps. Therefore, it stands to benefit from increasing complexity and capital intensity due to the AI revolution.
For instance, Applied's main etch and deposition business should get a significant boost on the 2nm node, set to come out in 2025. On the 2nm node, all major chipmakers will transition from a FinFET transistor architecture to a gate-all-around (GAA) architecture, in which transistors can be "stacked" vertically and surrounded on all sides by the gate. That will improve performance and power efficiency, but will also require more selective etch intensity.
Applied thinks the GAA transition is a $1 billion incremental opportunity alone. But because Applied touches lots of different process steps, there will be more expansion opportunities similar to GAA.
For instance, chipmakers are introducing backside power architectures on chips, which frees up room for more transistors on the "front." Then there is also advanced packaging, needed to produce "chiplet" architectures, such as those in Advanced Micro Devices' MI300 AI accelerator. And Applied Materials even has a strong business in lagging-edge power semiconductors used in electrification and Internet of Things applications.
As long as semiconductors continue to grow and computing intensity goes up, Applied should continue to benefit.
Cadence Design Systems
Just as all chips depend on a handful of semiconductor equipment companies, all chip designers typically rely on one of only two large software companies to design integrated circuits -- and one of those lucky companies is Cadence Design Systems (CDNS 0.26%).
With the artificial intelligence wars kicking into high gear and numerous players diving headlong into the chipmaking race, it's a pretty safe bet that there will be more and more chip designers. In fact, over the past decade, Cadence has not only seen revenue growth, but a general revenue acceleration.
And unlike many unprofitable cloud software-as-a-service stocks, Cadence is solidly profitable, with GAAP operating margins in the low-to-mid 30% range. In fact, Cadence is even lowering its share count via stock buybacks as it pursues growth, increasing shareholders' percentage of the business every year. Higher profits combined with a declining share count are a recipe for long-term shareholder returns.
Given that chipmaking complexity is likely to continue thanks to AI and other chip-intensive applications, Cadence should have much higher earnings and a much lower share count by the end of this decade.
Super Micro Computer
Unlike the prior two companies that operate in duopoly or oligopoly industries, server-maker Super Micro Computer (SMCI 1.87%) operates in the much more competitive and fragmented server market.
So how does it fit in with eventual AI winners? Because Super Micro has been cementing a unique business model ideally positioned for AI, which competitors may find hard to replicate.
In the server industry, there are expensive "high end" OEMs like Dell Technologies and Hewlett Packard Enterprise, with standardized models that usually serve enterprises. On the low end are ODMs, usually based in Asia, that sell "white-labeled" server parts that are often designed and assembled by tech-savvy customers such as big cloud computing giants.
In the middle is Super Micro, which used to essentially be a U.S.-based ODM. But over the years, Super Micro pivoted to a model that straddles both OEMs and ODMs. The model produces optimized server "building blocks," which Super Micro can then assemble into complete, complex, customized systems. So, you can think of the building blocks as akin to Legos.
The building-block system has a number of advantages, such as the ability for customers to customize systems exactly how they want, as well as the ability to swap out parts of a system instead of the entire server. And instead of customers buying from ODMs and constructing the systems themselves, Super Micro does a lot of the complex integration work for them. In fact, Super Micro can install entire rack-scale systems, allowing customers to simply plug a system into power and data sources and have it work instantly. And because Super Micro is based in Silicon Valley with close relationships with all major chipmakers, it is often first-to-market.
Finally, Super Micro has also been at the forefront of power-efficient server designs for years, citing the ability to reduce electricity needs through its highly efficient servers and cooling systems.
Mass customization, a faster time-to-market, and electricity cost savings are each highly sought-after attributes among AI players. That's why some analysts estimate Super Micro's share of AI servers jumped from 7% to 17% in just the last quarter.
While Super Micro has a close relationship with Nvidia, should another chipmaker eventually make inroads and gain AI market share, that chipmaker's products will still likely be deployed in Super Micro Computer servers in volume.