At Computex in Taiwan last weekend, artificial intelligence (AI) GPU leader Nvidia (NVDA -1.82%) made a very interesting announcement: It will open up its NVLink technology to chips other than its own, via a new product called NVLink Fusion.

The move can be interpreted both positively and negatively. Delving into the details, is it a good or bad sign for shareholders?

What is NVLink?

Today, AI processing is much more than the work of a single chip. Rather, Nvidia CEO Jensen Huang thinks whole servers, or even complete racks, can function as one "giant chip." For instance, Nvidia just began selling GB200 NVL72 racks, which contain 72 Blackwell GPUs and 36 Grace CPUs, all linked together through Nvidia's in-house networking technologies.

Those networking technologies fall into two product families: NVLink and InfiniBand. NVLink is an interconnect that links GPUs within a server or rack, whereas InfiniBand networks whole servers together.

What Nvidia is doing

This past weekend, Nvidia announced NVLink Fusion, which enables system manufacturers to integrate NVLink technology into servers built around non-Nvidia chips.

For instance, NVLink Fusion can be used to connect the custom ASICs that large cloud companies are designing for themselves. Nvidia named Marvell Technology (MRVL -2.10%) as a key partner in the venture. Marvell not only helps the cloud giants design custom AI accelerators, such as Amazon's Trainium, but also produces a range of data center interconnect technologies.

In addition, NVLink Fusion allows Nvidia GPUs to connect to non-Grace CPUs. For instance, Fujitsu and Qualcomm are attempting to break into the data center CPU segment, and per the press release, they have also partnered with Nvidia on NVLink Fusion, hoping to be included in more Nvidia AI systems.

Other partners mentioned in the press release included the electronic design automation software powerhouses Synopsys and Cadence Design Systems, which help chipmakers design chips and whole systems, along with other data center connectivity companies.

Why is Nvidia doing this?

Some may wonder why Nvidia is essentially opening up its NVLink technology to others, given that having a "closed" ecosystem would force customers to buy full Nvidia solutions with Nvidia GPUs and CPUs. So why is Nvidia being so "nice" and "open"?

Nvidia already sells plenty of GPUs into systems that use neither NVLink nor InfiniBand, and recent results show its networking sales badly lagging its chip sales. Last quarter, while Nvidia's data center semiconductor revenue was up 116%, its data center networking revenue was actually down 9%.

The reason is probably competition from legacy networking technologies that the industry is updating for AI. For instance, last year a consortium of major technology companies teamed up to create UALink, an open standard alternative to NVLink. That comes on top of the Ultra Ethernet Consortium, formed in 2023 by many of the same companies, which created an Ethernet-based alternative to Nvidia's InfiniBand -- one used even in Nvidia GPU-based systems.

With its networking technologies not being adopted to the extent its GPUs are, Nvidia may be trying to expand the addressable market for its networking technology to non-Nvidia systems to boost sales.

It may also be a defensive move

In addition to expanding the market for its networking technology, Nvidia may also be acknowledging the increased adoption of custom-designed AI accelerators versus its own chips. While Nvidia GPUs aren't going away, keep in mind that Nvidia makes roughly a 75% gross margin on its extremely expensive chips -- meaning the cost to manufacture a chip is only about 25% of its selling price. When a cloud company designs its own custom AI accelerator, it pays only that manufacturing cost to the foundry. So cloud companies pay roughly 25% of the price of a comparable Nvidia chip when they make their own.

There are extra research and development costs associated with designing one's own chips, but because AI accelerators are so expensive and are deployed in massive numbers in the latest AI systems, the hardware savings vastly outweigh those extra R&D costs.

As a result, ASICs are on the rise. According to The Information Network, custom ASICs' share of the AI chip market has increased from about 22% in 2023 to a projected 30% in 2025, while GPUs have slipped from about 72% in 2023 to a projected 65% this year. CPUs and FPGAs handle the remaining 5% or so of AI workloads, according to the analyst.

Nvidia should still grow, as the overall AI infrastructure market is expanding by leaps and bounds. However, the company may continue to cede market share to lower-cost custom ASICs over time. By opening up its NVLink technology, Nvidia may still find a way into the cloud giants' systems, creating incremental revenue opportunities even as ASICs take share from GPUs.

What it means for shareholders

Overall, it's probably a good idea for Nvidia to try to gain exposure to the cloud giants' ASIC-based systems, as those systems are likely to gain share over time. However, the new NVLink Fusion offering may serve as a warning to anyone anticipating hypergrowth for Nvidia GPUs over a long period -- or at least continued hypergrowth at today's margins.

While Nvidia currently has a first-mover and software advantage, the cloud giants have the means to invest in their own silicon, and they will keep doing so as long as it saves them tens of billions of dollars. Nvidia may be acknowledging that reality with NVLink Fusion.