At its recent AI event, Advanced Micro Devices (AMD 2.37%) fired shots at AI system pioneer and leader Nvidia (NVDA 6.18%), claiming supremacy in AI inference. AMD stock has been in rally mode as investors ratchet up their expectations for the scrappy chip designer in 2024.

Nvidia fired back, though, dousing AMD's claims with a full bucket of cold water. What are investors to make of these "AI chip supremacy" claims, and is there a simple way for investors to measure real AI business performance?

Nvidia volleys, and AMD doubles down on claims

In a previous article, I cited AMD's claim that its newest AI system, the upcoming MI300X, outperforms the competition in AI inference by 1.4x to 1.6x. AI inference is the computing work done after an AI model has been trained, when users begin querying the model for answers or having it perform some other task.

AMD was specifically comparing the MI300X to Nvidia's DGX H100 system, which will be making way for the higher-performance DGX GH200 system later in 2024. That latest and greatest from Nvidia will likely turn the tables once again.

But as for the MI300X-versus-H100 claims, Nvidia was quick to point out a few days later that AMD's performance benchmarks didn't run the H100 with Nvidia's proprietary software stack, which, like much of Nvidia's software for its chips and systems, is bundled with the H100 system at no additional cost (though the H100 itself commands a premium price). Instead, AMD compared an H100 and an MI300X both running open-source software.

At any rate, Nvidia claims that when benchmarked "properly," the H100 remains some 2x faster than the MI300X.

Just to add to the intrigue, a couple of days after that, AMD doubled down on its benchmarking claims with some updates of its own. AMD continues to improve its own AI software stack, called ROCm, and it says further optimizations have made the MI300X even more appealing for AI inference versus the H100.

AMD's latest slide claiming it still outperforms the Nvidia H100 by up to 2x, using its benchmarking.

Chart source: AMD.

What is an investor to believe?

All of these benchmarking claims in the AI race can be confusing or even downright meaningless for investors with minimal technical background in AI and computing system engineering. Is there an easier way to keep up?

Lean heavily on financial results in the AI race. After all, what better metric could there be for the layperson than one that follows the purchasing decisions of the folks actually building AI infrastructure? Though it's early in the global buildout of new AI infrastructure, Nvidia has already established itself as a leader, and it won't be easy for AMD (or even Intel (INTC -9.20%)) to simply catch up.

Company | Q3 Data Center and AI Segment Revenue | YOY Increase (Decrease)
Nvidia  | $14.5 billion                         | 282%
Intel   | $3.8 billion                          | (10%)
AMD     | $1.6 billion                          | 0%

Source: Nvidia, Intel, and AMD. Note: Intel and AMD data is for Q3 of fiscal 2023. Nvidia data is for Q3 of 2024, which ended October 2023. YOY = year over year.
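For readers who want to sanity-check growth figures like those in the table, the year-over-year change is simply the difference between the current and prior-year figures divided by the prior-year figure. Here is a minimal sketch in Python; the prior-year revenue of roughly $3.8 billion used below is an illustrative figure implied by the reported 282% increase, not a number taken from the table:

```python
def yoy_change(current: float, prior: float) -> float:
    """Year-over-year percentage change: (current - prior) / prior, in percent."""
    return (current - prior) / prior * 100

# Nvidia's $14.5 billion quarter against an implied prior-year figure
# of roughly $3.8 billion (illustrative, derived from the reported 282%).
print(round(yoy_change(14.5, 3.8), 1))  # about 281.6, in line with the reported 282%
```

A negative result, like Intel's, appears in parentheses in the table per standard financial convention.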

As for AMD, will 2024 be the year it starts to turn the tables on Nvidia's AI dominance? Perhaps, although the only specific public guidance points to AI GPU revenue (including the MI300X) of "at least $2 billion" expected for 2024. That would make AI systems a hit for AMD, but it will still have a sizable gap to close with Nvidia, which also expects to keep growing its data center and AI business.

AMD and Nvidia stocks both trade at a bit of a premium based on Wall Street analysts' early assumptions for what is expected to be a year of blistering earnings growth for both semiconductor companies. Nvidia trades for 25 times next year's consensus earnings estimate, while AMD trades for nearly 37 times expected earnings. Given this situation, both stocks have a lot to gain over the next decade, but the premium price makes them dollar-cost-averaging candidates, in my estimation.
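The forward multiples cited above follow from a simple formula: share price divided by next year's expected earnings per share. The sketch below illustrates the arithmetic with made-up numbers; the prices and EPS figures are hypothetical placeholders, not actual quotes or consensus estimates:

```python
def forward_pe(share_price: float, next_year_eps: float) -> float:
    """Forward price-to-earnings ratio: price divided by expected next-year EPS."""
    return share_price / next_year_eps

# Hypothetical example: a $500 stock with $20 of expected EPS
# trades at a forward P/E of 25.
print(forward_pe(500.0, 20.0))  # 25.0
```

The same price looks cheaper or pricier depending on the earnings estimate in the denominator, which is why early-year consensus figures can make these multiples shift substantially as estimates get revised.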