Tuesday was a big day for Advanced Micro Devices (AMD 3.44%), as it unveiled several new chip products as part of its Data Center and AI Technology Premiere.

All eyes were, for good reason, focused on artificial intelligence, especially the new MI300 accelerator. While AMD's new Epyc Bergamo CPU (central processing unit) will definitely play a role in advanced computing systems, the new MI300 accelerator chipset for AI is the biggest new product.

The MI300 is a monster, composed of 13 chiplets, combining CPUs and GPUs (graphics processing units) with high-bandwidth memory. AMD's plan for it is to challenge the Nvidia (NVDA 3.77%) H100 in training and running large language models.

Interestingly, AMD's stock fell on the day of the presentation and Nvidia's stock rose, perhaps suggesting some disappointment about the MI300. However, that may be a bit shortsighted, as the chip has yet to even enter production, and it's still early days in the AI races.

Why AMD's stock might have fallen

Investors may have sold AMD stock on the news of the MI300 for multiple reasons, and those reasons likely had less to do with what was in the presentation than with what was missing from it.

First, AMD didn't announce an "anchor" customer that had agreed to roll out the MI300 in its AI data centers. That may make sense, as the chip will only begin sampling to major cloud and data center providers in the third quarter, with production to ramp up in the fourth. Some analysts believe that means the MI300 won't be widely available until mid-2024; that would put it about 18 months behind Nvidia's H100, which began shipping in volume at the end of 2022 and beginning of 2023.

Eighteen months is a long time, and it could allow Nvidia to become more entrenched in the AI computing ecosystem. That's especially true as Nvidia's CUDA software platform, which allows developers to program graphics chips for computational processing and AI tasks, achieves more of a network effect. In the presentation, AMD added that it had developed its own open-source software stack called ROCm, but also acknowledged it had a long "journey" ahead on the software front.

Back in May, Ars Technica noted that Microsoft was sharing AI development resources with AMD, in an effort to build a more viable alternative to Nvidia's dominant position. Some investors may have thought that on Tuesday AMD would also announce a commitment to the MI300 from Microsoft. However, no such announcement was made.

Finally, AMD said nothing about speed metrics for the MI300's inference and training capabilities; little was disclosed beyond what was already known. CEO Lisa Su did confirm that, as previously reported, the MI300 contains 13 chiplets, combining AMD's Zen 4 CPUs and CDNA graphics engines with high-bandwidth memory (HBM). AMD also unveiled the MI300X, an alternative model with all GPUs and no CPUs. Still, other specs were scarce.

But all is not lost for the MI300 in the AI market

Despite the concerns, all is certainly not lost for AMD in its battle with Nvidia. While some performance details regarding the MI300 and Nvidia's H100 weren't disclosed, AMD did note that the MI300X can handle a higher workload than the H100, due to its massive size and access to 192 gigabytes of HBM. Su said the MI300X can handle 2.4 times the density and 1.6 times the bandwidth of the Nvidia H100's HBM, and that a single MI300X can run a large language model with up to 80 billion parameters, which AMD claims is a record.
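A rough back-of-envelope check makes the single-accelerator claim plausible. The sketch below assumes 16-bit (2-byte) weights and ignores activation memory and other runtime overhead; those assumptions are mine, not details from the presentation:

```python
# Back-of-envelope: can 192 GB of HBM hold an 80-billion-parameter model?
# Assumes 16-bit (2-byte) weights; ignores activations and other overhead.
PARAMS = 80e9          # 80 billion parameters
BYTES_PER_PARAM = 2    # fp16/bf16 weight storage (assumed)
HBM_GB = 192           # MI300X high-bandwidth memory capacity

weights_gb = PARAMS * BYTES_PER_PARAM / 1e9
print(f"Weights alone: {weights_gb:.0f} GB of {HBM_GB} GB HBM")
# Weights alone: 160 GB of 192 GB HBM
print("Fits on one accelerator:", weights_gb <= HBM_GB)
# Fits on one accelerator: True
```

Under those assumptions, the model's weights alone consume about 160 GB, which leaves headroom within the MI300X's 192 GB; on an H100 with less on-board memory, the same model would have to be split across multiple chips.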

So, while some may have been underwhelmed at the lack of further performance specs, AMD is hoping to differentiate itself by allowing partners to run models with fewer GPUs overall. That could offer total-cost-of-ownership savings, especially as Nvidia chips are so expensive. And because the MI300 is built from chiplets, AMD can offer several versions of the chipset, mixing and matching components to give customers more choice.

In addition, on Wednesday, Amazon Web Services said it was "considering" the MI300 for its AI data centers. While that's certainly not a definitive commitment, it's clear that major cloud providers would love to have a second alternative in the AI accelerator space.

All in all, AMD has successfully innovated in and come to lead the CPU space, so it's likely that, given time, it can gain a foothold in the AI accelerator space too.

Meanwhile, several analysts raised their price targets on AMD Wednesday, and the stock price recovered most of Tuesday's decline. Analysts appeared to acknowledge that the MI300 is a big step forward for the company.

While the new chip might not yet be able to beat the Nvidia H100 head-to-head in some respects, it should be able to gain some market share in the massive AI accelerator market -- which Su believes should grow 50% per year to $150 billion by 2027, up from $30 billion today.
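Su's projection checks out arithmetically. Treating 2023 to 2027 as four years of compounding (my assumption about how the figure was derived):

```python
# Sanity check: does $30B growing 50% per year reach ~$150B by 2027?
market_2023_b = 30   # starting market size, $ billions
growth = 1.5         # 50% annual growth
years = 4            # 2023 -> 2027 (assumed compounding period)

market_2027_b = market_2023_b * growth ** years
print(f"Implied 2027 market: ${market_2027_b:.0f} billion")
# Implied 2027 market: $152 billion
```

That lands right around the $150 billion figure Su cited.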