Over the last decade, NVIDIA (NASDAQ: NVDA) hardware has become synonymous with cutting-edge graphics and high-performance computing. Not surprisingly, shareholders have done quite well for themselves. The stock is up a whopping 1,300% in the last five years alone, crushing the performance of the broader market.

However, this chipmaker still has plenty of room to grow its business, especially in the data center space. Here's what investors should know.

The leader in accelerated computing

In 1999, NVIDIA invented the graphics processing unit (GPU), a chip designed to run compute-intensive code in parallel. In other words, GPUs can perform many calculations simultaneously, allowing them to churn through large amounts of data very quickly. While these chips were originally created to render ultra-realistic video game graphics, they have since become an important part of data centers.


Specifically, NVIDIA GPUs have become the gold standard for accelerating workloads like analytics, artificial intelligence, and scientific computing. In fact, the company recently set records in MLPerf, a series of benchmarks designed to measure the performance of AI computing platforms. And NVIDIA currently controls 90% of the market for supercomputer accelerators.

Data center budgets were still heavily skewed toward central processing units (CPUs) in 2020, with these chips comprising 83% of total spend on processors. But Ark Invest believes that figure will drop to 40% over the next decade. In other words, by 2030, GPUs will not only be the dominant data center accelerator; they will also command the largest share of processor spending.

That's good news for NVIDIA. Management puts its addressable market in the data center business at $100 billion by 2024, and that figure should be even bigger by 2030. Given the company's strong competitive position, NVIDIA is well positioned to capture the lion's share of that opportunity.


A three-chip company

NVIDIA has also expanded beyond its trademark GPU. In 2020, it completed its $7 billion acquisition of Mellanox, a company that specializes in high-performance networking solutions. Since the merger, NVIDIA has introduced a new chip that incorporates Mellanox technology: the data processing unit (DPU).

This chip offloads networking, storage, and security tasks from CPUs, boosting performance and efficiency. More broadly, the Mellanox acquisition makes NVIDIA's compute platform more robust, enabling the company to optimize workloads across computing, networking, and storage, which should drive market share gains in the data center.

More recently, NVIDIA announced a third chip: the Grace CPU. This processor is slated to launch in 2023 and features energy-efficient ARM cores that NVIDIA says will deliver 10 times the performance of today's fastest servers. The Grace CPU will join the GPU and DPU to complete NVIDIA's compute platform, and there's reason to believe it will be a success.

For the last two decades, CPUs built on x86 architecture -- think Intel and AMD -- have dominated the data center, and the pair captured 92% market share in 2020. But Ark Invest believes that will change over the next 10 years, because ARM CPUs are becoming faster and cheaper. To that end, Ark estimates that ARM and RISC-V will hold 71% market share by 2030, while x86 chips will drop to 27%.

Again, this is good news for NVIDIA. If this trend does indeed play out, investors should expect the Grace CPU -- which is built on ARM cores -- to be a key growth driver.

The bottom line

NVIDIA is now a three-chip company, meaning its product portfolio addresses a greater portion of data center infrastructure. And in the years ahead, NVIDIA should see strong demand as GPUs become the most prevalent data center processor and ARM chips become the most prevalent data center CPU.

That's why I think NVIDIA will dominate the data center by 2030.