Nvidia (NVDA 1.69%) and its ultra-powerful data center GPUs are at the center of the artificial intelligence (AI) revolution. Advanced AI models like those powering OpenAI's ChatGPT require serious computational horsepower to train and run. Thanks to more than a decade of work building up the ecosystem around its chips, Nvidia is the runaway market leader.

Nvidia may not remain so dominant forever. Both Advanced Micro Devices (AMD 4.94%) and Intel (INTC 1.48%) are gunning for the AI accelerator market. While Nvidia won't be dethroned anytime soon, AMD and Intel are seeing exploding interest in their own AI solutions.

Advanced Micro Devices

AMD isn't new to the world of data center GPUs, but it has so far largely missed out on the AI boom. Part of the problem is software. Nvidia has been chugging away since 2006 on its CUDA compute platform, which works only with its own GPUs and has become the industry standard. A massive ecosystem has been built around CUDA, making Nvidia's GPUs the path of least resistance for essentially any accelerated workload in the data center.

AMD is looking to change this situation with a two-pronged approach. On the hardware front, the company is set to launch a family of ultra-powerful chips under its Instinct brand. This includes the MI300X, a pure GPU aimed at heavy-duty generative AI training workloads, and the MI300A, which includes CPU and GPU cores.

On the software front, AMD is investing in ROCm, its open software ecosystem. ROCm has been around since 2016, and while Nvidia has a huge head start, the latest release supports popular machine learning frameworks including TensorFlow and PyTorch. With demand for AI accelerators soaring, AMD is now making a renewed push.

While AMD has yet to realize any meaningful financial benefit from the AI boom, that could change later this year and in 2024 as its powerful MI300 hardware launches. The company said that AI cluster engagements in the second quarter were up by a factor of 7 over the first quarter, suggesting intense interest in the new chips. Software puts AMD at a disadvantage, but it will be launching into a market that's clamoring for alternatives to Nvidia's pricey GPUs.

Intel

Chip giant Intel has long dominated the market for data center CPUs while largely missing out on the market for data center accelerators. As part of its multi-year turnaround strategy, the company now has multiple ways to go after soaring demand for AI chips.

First, Intel's chronically delayed Sapphire Rapids server CPUs feature built-in accelerators, some of which are tailor-made for AI inference workloads. Intel claims to beat rival AMD's latest server CPUs by as much as a factor of 7 in AI workloads. While Intel's CPUs aren't capable of training or running large, advanced generative AI models, they could be a cost-effective solution for running smaller models and other light-duty AI tasks.

Beyond CPUs, Intel offers its Max series data center GPUs and its specialized Gaudi line of AI chips. Gaudi comes from the 2019 acquisition of Habana Labs, and while the current Gaudi2 chips aren't as fast as Nvidia's latest data center GPUs, they are competitive on a price-performance basis.

Cloud computing leader Amazon Web Services already offers instances powered by Gaudi chips, and Intel is working on the next-generation version of Gaudi to tap into exploding demand. The company saw a six-fold increase in its accelerator pipeline in the second quarter, driven largely by Gaudi. Like AMD, Intel is at a disadvantage when it comes to software, although Gaudi supports popular AI frameworks.

Intel can't beat Nvidia on raw AI training performance, but it can compete on total cost of ownership. Given how expensive it is to train large AI models, that looks like a winning strategy.