While each "Magnificent Seven" stock is exciting in the artificial intelligence (AI) era, none is as compelling as Nvidia (NASDAQ: NVDA). For the past year, the company has exceeded even the tremendous expectations placed on it ever since last May's blowout earnings report. And this week, its GTC developer conference didn't disappoint.

During the conference, CEO Jensen Huang introduced Nvidia's new GPU chip named Blackwell, which will be out later this year. It's Nvidia's next-generation GPU architecture after Hopper, which has taken the tech world by storm over the past two years.

The sheer computing accomplishments of Blackwell highlight Nvidia's importance and why it may become the world's most valuable company one day, up from the third-most valuable today.

Blackwell is a beast

While the Hopper chips currently on the market pack quite a punch and are responsible for ushering in the modern AI era, Blackwell trounces its predecessor and is likely to enable some truly eye-opening AI applications. First, it's physically bigger than Hopper. Huang held both chips side by side during the presentation, and Blackwell appeared to have roughly twice the surface area of Hopper.

That's not by accident. The Blackwell chip spans two semiconductor dies linked by advanced packaging, with an interconnect fast enough that the two dies behave as a single chip.

Blackwell boasts 208 billion transistors, up from the "mere" 80 billion on Hopper. So even though Blackwell comprises two dies, each die still carries 104 billion transistors, an extra 24 billion per die over Hopper, packing even more computing punch into each unit of surface area.
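The per-die arithmetic above is easy to verify. A quick sketch in Python, using Nvidia's published transistor counts:

```python
# Back-of-the-envelope check of the per-die transistor counts
# described above. Figures are Nvidia's published numbers.
BLACKWELL_TOTAL = 208e9   # transistors across both Blackwell dies
BLACKWELL_DIES = 2
HOPPER_TOTAL = 80e9       # transistors on Hopper's single die

per_die = BLACKWELL_TOTAL / BLACKWELL_DIES
extra_per_die = per_die - HOPPER_TOTAL

print(f"Blackwell transistors per die: {per_die / 1e9:.0f} billion")      # 104 billion
print(f"Extra transistors per die vs. Hopper: {extra_per_die / 1e9:.0f} billion")  # 24 billion
```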

The result is a big leap in performance. Running in a comparable mode, Blackwell delivers 2.5 times Hopper's performance for training workloads. And using Nvidia's new lower-precision data formats for inference, Blackwell delivers five times Hopper's performance.

Stringing Blackwells together leads to supercomputing power in a single rack

One product format Nvidia has innovated is the idea of a "superchip" that combines two GPUs with one of Nvidia's new Grace CPUs over a high-speed NVLink interconnect. The Blackwell version is called the GB200 Grace Blackwell Superchip.

Nvidia builds these "superchip" systems with technology licensed from Arm Holdings for the Grace CPU, while networking technology from Mellanox, which Nvidia acquired in 2019, complements its homegrown NVLink interconnect. Networking and interconnect technology is becoming increasingly important as more and more chips are stitched together into powerful AI systems; by getting multiple chips to function as one unit, these links act essentially as extensions of the transistors themselves.

On that note, the company isn't stopping at selling just one superchip. It has also developed a new 50-billion-transistor NVLink switch chip that can stitch together the 72 Blackwell GPUs in 36 GB200 superchips, forming a liquid-cooled rack system called the Nvidia GB200 NVL72.

Nvidia Blackwell server racks.

Image source: Nvidia.

While Blackwell is the star of the show, this NVLink chip may be the most important new feature. The new NVLink chip connects all of these GPU superchips over copper, not optical, connections.

That's an important feature, as optical connections require much more power for the transceivers that convert between electrical and optical signals. Since GPU supercomputers already consume huge amounts of electricity, eliminating optical links is a huge power saver and frees that power for computing. That's a big deal.

The NVLink switch chip essentially gets all 72 GPUs to function as a single system: a massive GPU with supercomputer-like specs in a single server rack. The NVL72 system can deliver 720 petaflops for training and 1.4 exaflops for inference, which Nvidia says is a stunning 30 times the inference performance of a similar number of Hopper chips strung together.

FLOPS is a unit of measurement in supercomputing that stands for floating-point operations per second -- essentially, mathematical calculations. A petaflop is one quadrillion FLOPS, and an exaflop is one quintillion. That's a lot of math!
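Those units can be written out exactly. A small Python sketch converting the NVL72 figures quoted above (the values are Nvidia's published specs):

```python
# Writing out the FLOPS units quoted above, using exact integers.
PETA = 10**15   # one quadrillion
EXA = 10**18    # one quintillion

training = 720 * PETA          # 720 petaflops for training
inference = 14 * EXA // 10     # 1.4 exaflops for inference, kept exact

# 1.4 exaflops expressed in petaflops:
print(inference // PETA, "petaflops")  # 1400 petaflops
```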

Destroying Moore's Law

If you're looking for a reason Nvidia's stock is skyrocketing, just look at how quickly it has been able to accelerate computing power. Beyond any single product or service, its stock price practically mirrors the exponential rise in raw computing power the company has achieved.

Consider that the world's first exascale supercomputer was only built in 2022, and only a handful of such systems exist today -- yet Nvidia will essentially be able to deliver exascale power in a single Blackwell server rack later this year. That's a pretty amazing leap. And when one considers that a human working by hand can manage only about one floating-point operation per second (as opposed to the Blackwell NVL72's 1.4 quintillion), it's no wonder some think artificial intelligence systems will surpass human intelligence in the near future.

The most advanced generative AI model today is probably OpenAI's GPT-4, reportedly built on roughly 1.76 trillion parameters (an unconfirmed industry estimate). But Nvidia says the new NVL72 Blackwell system can handle a 27-trillion-parameter model on its own. So investors can look forward to some truly stunning AI applications in the next few years as a result of these Blackwell systems.
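To get a feel for why rack-scale systems matter for models this large, here is a rough sketch of the weight-storage footprint at different numeric precisions. The GPT-4 parameter count is the unconfirmed estimate mentioned above, and this ignores activations, optimizer state, and other runtime memory:

```python
# Rough memory footprint of model weights alone at different
# numeric precisions. Parameter counts are those quoted above;
# the GPT-4 figure is an unconfirmed industry estimate.
def weight_bytes(params, bits_per_param):
    """Bytes needed to store `params` weights at the given precision."""
    return params * bits_per_param / 8

for name, params in [("GPT-4 (reported)", 1.76e12),
                     ("27T-parameter model", 27e12)]:
    for bits in (16, 8, 4):
        tb = weight_bytes(params, bits) / 1e12
        print(f"{name} @ {bits}-bit: {tb:.1f} TB of weights")
```

Even at 4-bit precision, a 27-trillion-parameter model needs about 13.5 TB just for its weights, which is far beyond any single GPU and illustrates why dozens of chips must act as one.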

In the CPU era, Moore's Law held that computing power would essentially double every one to two years, growing about 10 times in five years and 100 times in 10 years. But as Huang explained in the GTC presentation, Nvidia's GPU chipmaking prowess, software innovation, and networking technology working together have shattered that paradigm. Over the last eight years, Nvidia has increased the computing power of its energy-equivalent systems by a whopping 1,000 times!
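The gap between those two growth curves is easy to quantify. A short Python sketch comparing classic Moore's Law scaling with the 1,000x-in-eight-years figure Huang cited (the implied doubling time is an inference from that figure, not an Nvidia number):

```python
# Comparing classic Moore's Law scaling with the 1,000x-in-eight-years
# figure cited in the GTC keynote. A doubling every t years gives
# 2 ** (years / t) total growth.
import math

years = 8
moore_growth = 2 ** (years / 2)   # doubling every two years -> 16x
nvidia_growth = 1000              # Nvidia's claimed gain over 8 years

# Doubling time implied by 1,000x growth in 8 years:
doubling_time = years / math.log2(nvidia_growth)

print(f"Moore's Law over {years} years: ~{moore_growth:.0f}x")                       # ~16x
print(f"Implied doubling time for 1,000x in {years} years: {doubling_time:.2f} years")  # 0.80 years
```

In other words, Nvidia's claimed trajectory amounts to doubling system performance roughly every ten months, versus every two years under the classic Moore's Law cadence.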

Setting aside all the cool artificial intelligence, gaming, virtualization, and self-driving car applications, this sheer acceleration of computing power beyond Moore's Law is the central reason Nvidia's stock has skyrocketed as much as it has. It's also why, one day, the company could potentially vault past the other Magnificent Seven stocks to become the most valuable company in the world.