Nvidia's (NASDAQ: NVDA) engine of innovation is running in high gear. It's difficult to concisely summarize all of the new products and services announced at the tech company's annual spring GPU Technology Conference (GTC) that kicked off on March 21. But Nvidia's powerful new chip designs and systems aimed at artificial intelligence (AI) are a good place to start.

Nvidia unveiled three of these systems, each pushing the limits of computing power and designed to work together to help unlock the potential of the AI economy.

For shareholders, there is a lot to be excited about here.

The NVIDIA Hopper GPU chip.

Image source: Nvidia.

Introducing the new GPU, "Hopper"

Almost two years ago, Nvidia unveiled its "Ampere" architecture for data center and AI graphics processing units (GPUs), named after mathematician and physicist André-Marie Ampère. Ampere was the technology behind Nvidia's DGX A100 supercomputing system, accelerating data centers all over the world and powering services ranging from video game streaming to critical healthcare research.

This year, CEO Jensen Huang announced "Hopper" (named after computer scientist Grace Hopper), the successor to the company's AI GPU architecture. The Hopper GPU is based on Taiwan Semiconductor Manufacturing's (NYSE: TSM) 4-nanometer process and has 80 billion transistors (versus 54 billion in the A100). Hopper will power the new DGX H100 supercomputing unit for AI and high-performance computing in data centers.

Huang said Hopper is not just an update to Ampere (although new Nvidia technology is always compatible with older-generation hardware). Hopper solves new problems and will achieve breakthroughs in what AI and machine learning are capable of doing. For context, Huang said that just 20 H100 GPUs have enough computational power to handle "the equivalent of the entire world's internet traffic."

Processing data is one thing -- moving it places is another

Nvidia's ability to help its customers process massive amounts of data to power AI is well known. And thanks to the often-overlooked acquisition of Mellanox a couple of years ago, moving data is also a company specialty.  

That was the crux of the new NVLink-C2C interconnect, which builds on Nvidia's data-transfer know-how. Nvidia launched NVLink back in 2014 as a high-speed link for moving data between its GPUs and central processing units (CPUs, more on those in a moment) within a data center. The new NVLink-C2C speeds up that process and also unlocks the ability to connect new types of chips (thus the C2C moniker, for chip-to-chip). This allows customized configurations for engineers, who can now tightly link together GPUs, CPUs, DPUs, NICs, and SoCs (wow, that's a lot of acronyms) without data transfer being throttled by an interconnect that's not up to the task.

Surprise! "Grace" is now two CPUs in one

At the spring 2021 GTC, Huang announced an upcoming data center CPU, dubbed "Grace," slated for availability in early 2023. This year, the Grace CPU Superchip was unveiled, pairing two Grace CPUs linked with the above-mentioned NVLink-C2C technology.

Nvidia said the Grace Superchip is flexible enough to perform on its own as a stand-alone server (a computer that delivers a service to other machines). But it will excel as part of an AI system built on GPU-accelerated servers with Hopper-based GPUs. Again, NVLink-C2C can serve as the interconnect between the Grace CPU Superchip and the Hopper GPUs.

Grace CPUs are still expected to be available during the first half of 2023. 

Technical specs in investor-speak

What does all of this technological innovation mean to you and me as shareholders? If Nvidia keeps a step ahead of the competition, it gets repeat business when its customers need to upgrade their hardware. It's also landing new business by expanding its portfolio of silicon to address more parts of the modern data center. The larger data center market is still dominated by the likes of Intel (NASDAQ: INTC) and Advanced Micro Devices (NASDAQ: AMD), so Nvidia's push into this space should yield plenty of growth opportunities.

Additionally, it's deepening relationships with customers by building software capabilities atop its hardware. That level of vertical integration (spanning basic chip design all the way to software services delivered to engineers, developers, and other consumers) is unparalleled in the semiconductor industry. Nvidia has gone from video game semiconductor designer to full-blown technology platform. Organizations around the globe are turning to Nvidia to implement AI into their operations, and that bodes well for the company's continued long-term success.