Artificial intelligence (AI) is all the rage lately. Human-like interactions from the latest generation of chatbots -- powered by generative AI systems -- have piqued public interest. Microsoft raised eyebrows when it confirmed a $10 billion investment in ChatGPT-creator OpenAI and its plans to integrate the system with its Bing search engine. Alphabet's (GOOGL 1.68%) (GOOG 1.75%) Google helped fuel the fervor with the debut of its own chatbot, Bard.

Nvidia (NVDA 3.34%) has long been the gold standard semiconductor used for training AI systems, but being No. 1 invites and even encourages competition, as rivals are always looking to topple the leader.

Now Google is claiming to have wrested the title from Nvidia, but things are a bit more complicated.


An audacious claim

In a scientific paper released Tuesday, researchers at Google claimed that a supercomputer powered by its latest tensor processing unit (TPU) -- a processor specifically designed to train AI models -- outperformed a system powered by comparable Nvidia chips. 

The process involves feeding massive amounts of information through these systems to train them for specific tasks, including providing more human-like responses to questions. Because of the enormous datasets used to train these AI models, researchers strung together more than 4,000 TPUs to create a supercomputer, using specially designed switches to distribute the workload across the massive array of chips.

Google said that in its tests, its 4th-generation TPU, which the company uses to train its own AI systems, was as much as 1.7 times faster and 1.9 times more power efficient than Nvidia's comparable A100 Tensor Core GPU. 

This would seem to suggest that Google has succeeded in knocking Nvidia off its lofty perch, which would be of great concern to Nvidia shareholders -- but the devil is in the details, and the results aren't as cut and dried as you might expect.

Not apples-to-apples

When technology moves at the speed of sound, the latest news is often outdated before the ink is dry, and that appears to be the case here.

Early last year, Nvidia announced the development of the H100 "Hopper" chip, which the company said would be nine times faster for training AI models and up to 30 times faster for inference -- the process of using the models after they've been trained -- when compared with its A100 chip. The company also said the H100 was the most "power-efficient Nvidia GPU to date." This suggests that Nvidia's H100 processor likely outperforms Google's latest TPU across the board.

Google disclosed that it didn't compare its 4th-generation TPU to the H100 because Nvidia's latest flagship processor hit the market after the research was completed. But the company did suggest that additional upgrades were on the drawing board, noting that Google has "a healthy pipeline of future chips."  

What these developments mean for investors

It's important for investors to put these latest developments into historical context. This isn't the first time Google has claimed its AI-centric processor was better than a comparable one by Nvidia, only to be outdone by Nvidia's latest innovations.

In its fiscal 2023 (which ended Jan. 29), Nvidia spent $7.34 billion on research and development (R&D) expenses, up 27% compared to fiscal 2022. Not only that, R&D costs last year represented 27% of Nvidia's revenue, up from 20% in the prior year. 

This helps illustrate why Nvidia has been the king of the AI-processor hill for so long -- its breakneck pace of innovation. The company's GPUs continue to be the gold standard for training and inference in AI systems. Nvidia has consistently demonstrated its ability to stay one step ahead of all comers, even those with massive size and gargantuan budgets.

That's not to say Alphabet isn't also a worthwhile investment -- which is why I own both.