The company formerly known as Facebook, Meta Platforms (META -0.52%), just unveiled details of its new AI Research SuperCluster (RSC). It's a massive supercomputer housed in a data center, and once it's fully built out in mid-2022, Meta says it will be the world's most powerful AI supercomputer (at least among publicly announced projects).

It's no surprise this was a joint press release with Nvidia (NVDA -3.33%). The GPU designer has built itself into the leader in artificial intelligence (AI) hardware and has partnered with Meta in the past. This time is no different: Nvidia's latest hardware will power Meta's RSC.


Building the foundation of the metaverse

Meta's new facility will be truly impressive. Outfitted with 760 Nvidia DGX A100 systems (a computing unit Nvidia purpose-built for high-end data center computing), it crams 6,080 Nvidia GPUs into the supercluster, all linked together using the networking hardware Nvidia picked up in its 2020 acquisition of Mellanox.
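As a quick sanity check on that GPU count, Nvidia's published spec puts eight A100 GPUs in every DGX A100 system, so the total follows directly from the number of systems. A minimal sketch of the arithmetic:

```python
# Back-of-the-envelope check on Meta's phase 1 GPU count.
# The 8-GPUs-per-system figure is Nvidia's published DGX A100 spec.
dgx_systems = 760
gpus_per_dgx = 8

total_gpus = dgx_systems * gpus_per_dgx
print(f"Phase 1 GPU count: {total_gpus:,}")  # 6,080, matching Meta's figure
```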

Currently, the RSC delivers a maximum of nearly 1.9 exaflops of performance (an exaflop is 1,000 petaflops, or roughly a quintillion calculations per second). However, a second phase of construction later this year will expand the RSC to 16,000 GPUs and boost computing power to as much as 5 exaflops. Not sure what that means? Here's a reference point: The Fugaku supercomputer in Japan, currently regarded as the fastest in the world, tops out at about 2.15 exaflops on the kind of lower-precision math used for AI workloads.
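Those headline numbers line up with Nvidia's quoted A100 peak of 312 teraflops for TF32 math (with sparsity enabled). Treating that spec as the basis for Meta's figures is an assumption rather than anything Meta has spelled out, but the rough math checks out:

```python
# Rough conversion from per-GPU throughput to cluster-level exaflops.
# 312 TFLOPS is Nvidia's quoted A100 TF32 peak with sparsity; assuming it is
# the basis for Meta's headline numbers is our guess, not Meta's stated math.
TFLOPS_PER_A100_TF32 = 312      # teraflops per GPU
TFLOPS_PER_EXAFLOP = 1_000_000  # 1 exaflop = 1,000 petaflops = 1,000,000 teraflops

def cluster_exaflops(gpu_count: int) -> float:
    return gpu_count * TFLOPS_PER_A100_TF32 / TFLOPS_PER_EXAFLOP

print(f"Phase 1 (6,080 GPUs): ~{cluster_exaflops(6_080):.1f} exaflops")    # ~1.9
print(f"Phase 2 (16,000 GPUs): ~{cluster_exaflops(16_000):.1f} exaflops")  # ~5.0
```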

When Facebook rebranded to Meta Platforms late last year, it said it was serious about building the metaverse, and backed that up with a pledge to spend $10 billion starting this year (and going up from there). The RSC clearly makes good on that initial promise. Meta says it will use its Nvidia-powered machine to research and build services using computer vision (giving a computer the ability to "see" and recognize its surroundings) and natural language processing (like the ability to have a conversation with a device, or to converse with groups of people who speak different languages in real time).

A boon for chip companies

The beauty of Nvidia's hardware, custom-designed to process massive amounts of data, isn't just the equipment itself. Nvidia also bundles a huge portfolio of software with its DGX A100 systems, aimed at simplifying complex AI research and development. That portfolio includes software for image recognition, recommendation systems, and conversational AI, all ready to deploy so a developer can start experimenting and building a new software service.
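To make that "ready to deploy" idea concrete, here's a minimal, generic sketch of what starting from a pretrained image-recognition model looks like. It uses the open-source torchvision library purely as a stand-in (this is not Nvidia's DGX software stack, just the same general workflow of loading an already-trained model and running it with no training of your own):

```python
# Generic illustration of starting from a pretrained image-recognition model.
# torchvision is used here as a stand-in, NOT Nvidia's actual DGX software.
import torch
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT          # pretrained ImageNet weights
model = resnet50(weights=weights).eval()    # ready to use, no training needed

# A dummy batch standing in for a real, preprocessed photo
# (3 color channels, 224x224 pixels, as the model expects).
dummy_image = torch.rand(1, 3, 224, 224)

with torch.no_grad():
    logits = model(dummy_image)

top_class = logits.argmax(dim=1).item()
print("Predicted ImageNet class index:", top_class)
# On a DGX A100, the same model would simply be moved to the GPUs, e.g. model.cuda().
```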

Historically, supercomputers like the RSC were the realm of the academic world, but work from companies like Nvidia has made AI more approachable than ever. A sort of AI supremacy race might be starting. Expect to see more mega-supercomputer announcements like Meta's in the years ahead as other large companies try to build out the metaverse (that is, future web-based computing platforms and services) and capture a share of the ever-expanding IT industry.

What does that mean for Nvidia? Though the semiconductor industry is cyclical (periods of booming sales are often reset by periods of slowing or briefly declining sales), Nvidia's overall trajectory looks likely to keep pointing up for many years to come. That bodes well for other chip designers playing in this growing AI sandbox too, like Advanced Micro Devices -- which, by the way, currently supplies Nvidia with a pair of EPYC server CPUs for every DGX A100 data center system built.

Long story short, it looks like Meta is betting big on developing the future of computing with the RSC, and Nvidia is playing a key role in this process. Data center sales have been a primary contributor to Nvidia's rapid growth over the last few years, and that trend looks far from finished. It underscores the momentum Nvidia has behind it as the AI computing arms race heats up.