Nvidia (NVDA 1.03%) looks unstoppable. The company just posted $57 billion in quarterly revenue, with its data center business growing at a 66% annual rate, and CEO Jensen Huang cited $500 billion in visibility into chip demand through 2026. With a market share of around 90% in artificial intelligence (AI) accelerators, Nvidia has become the default infrastructure provider for the generative AI era.
But Alphabet (GOOGL +1.26%) (GOOG +1.46%) has been quietly building an alternative. And it's starting to matter.
A real competitor emerges
Alphabet began designing its own AI chips in 2013 -- years before ChatGPT made "AI" a household term. The Tensor Processing Unit (TPU) originated as an internal project designed to meet the computational demands of Google's Search and Translate services. Today, it has evolved into a commercial platform that directly competes with Nvidia's data center GPUs.
The latest generation, TPU v7 Ironwood, closely matches Nvidia's flagship Blackwell chips in raw compute power in published benchmarks, while offering system-level efficiency advantages for specific workloads. More importantly, Google Cloud now makes these chips available to external customers -- and some of the biggest names in AI are taking notice.
Nine of the top 10 AI labs now use Google Cloud infrastructure. Apple trained its foundation models for Apple Intelligence on clusters of 8,192 Google TPU v4 chips -- not Nvidia GPUs. Anthropic, the company behind Claude, recently secured access to up to 1 million Google TPUs through a multibillion-dollar partnership. Reports suggest that Meta Platforms is in talks to deploy Alphabet's TPUs alongside its own custom silicon as early as 2027.
These high-profile deployments are significant because they demonstrate that the TPU platform works at scale. If Apple -- arguably the most demanding engineering organization in tech -- chose Alphabet's chips for its flagship AI initiative, that's a strong signal the technology is enterprise-ready.
The economics of inference
The real threat to Nvidia isn't in training frontier models. That market demands the raw horsepower and general-purpose flexibility where Nvidia's GPUs excel. The threat is in inference -- actually running those models to serve billions of users.
Training is a capital expenditure. You do it once (or periodically) to create a model. Inference is an operational expenditure that runs constantly, and its costs compound as AI applications scale. By 2026, analysts expect inference revenue to surpass training revenue across the industry.
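To see why that matters, consider a back-of-the-envelope sketch in Python. Every number here is hypothetical, chosen only to show the shape of the curve: a one-time training bill gets overtaken by a recurring inference bill once usage is large enough.

```python
# All figures are hypothetical, for illustration only.
TRAINING_COST = 100_000_000        # one-time cost to train a frontier model
COST_PER_1K_QUERIES = 2.00         # serving cost per 1,000 user queries
QUERIES_PER_MONTH = 3_000_000_000  # e.g., 100M daily users at ~1 query/day

monthly_inference = QUERIES_PER_MONTH / 1_000 * COST_PER_1K_QUERIES  # $6M/month

for month in range(1, 61):
    if monthly_inference * month > TRAINING_COST:
        print(f"Month {month}: cumulative inference spend "
              f"(${monthly_inference * month:,.0f}) passes the "
              f"one-time training cost (${TRAINING_COST:,.0f})")
        break
```

Under these made-up numbers, the crossover comes in month 17 -- and unlike training, the inference line keeps climbing every month after that.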
This is where Alphabet's vertical integration shines. Reports indicate that for certain large language model inference workloads, Google's latest TPUs can deliver up to 4 times better performance per dollar than Nvidia's H100. Midjourney, the popular AI image generator, reportedly cut its monthly inference costs by 65% after migrating from Nvidia GPUs to Google's TPU v6e pods.
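The arithmetic behind those two figures lines up. A quick sketch with a hypothetical $1 million monthly GPU bill shows why a 4x performance-per-dollar edge lands in the same neighborhood as Midjourney's reported savings:

```python
# Hypothetical baseline spend; the 4x multiple is the reported upper bound.
gpu_monthly_bill = 1_000_000
perf_per_dollar_multiple = 4.0

# Matching the same throughput at 4x perf/$ costs one quarter as much.
tpu_monthly_bill = gpu_monthly_bill / perf_per_dollar_multiple
savings = 1 - tpu_monthly_bill / gpu_monthly_bill

print(f"TPU bill: ${tpu_monthly_bill:,.0f} ({savings:.0%} savings)")
# Prints: TPU bill: $250,000 (75% savings)
```

A theoretical 75% ceiling, minus real-world overhead from migration and workloads that don't hit the best case, is consistent with the 65% Midjourney reportedly achieved.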
For AI companies burning through venture capital, those savings aren't just efficient -- they're existential.
The software moat is shrinking
For two decades, Nvidia's real competitive advantage wasn't silicon -- it was software. The CUDA programming platform created massive switching costs. Researchers wrote code in CUDA, universities taught CUDA, and enterprises deployed on CUDA. Leaving meant rewriting everything.
That moat is eroding. Modern machine learning frameworks such as PyTorch and JAX increasingly abstract away the underlying hardware. With PyTorch/XLA, developers can now run standard PyTorch models on TPUs with minimal code changes. That reduces the friction that once locked customers into Nvidia's ecosystem, even though CUDA still retains the larger and more mature developer community.
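To give a sense of what "minimal code changes" means in practice, here is a rough sketch of the PyTorch/XLA pattern. It assumes a TPU host with the torch_xla package installed (the exact API surface varies a bit across releases), and it uses a toy model and a dummy batch. The only TPU-specific lines are the device lookup and the optimizer step:

```python
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm

# The device lookup is the main change; everything below is standard PyTorch.
device = xm.xla_device()                # resolves to the attached TPU core
model = nn.Linear(784, 10).to(device)   # toy stand-in for a real model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Dummy batch for illustration; real training would pull from a DataLoader.
inputs = torch.randn(64, 784).to(device)
labels = torch.randint(0, 10, (64,)).to(device)

optimizer.zero_grad()
loss = nn.functional.cross_entropy(model(inputs), labels)
loss.backward()
# Stands in for optimizer.step(); barrier=True flushes the queued XLA graph.
xm.optimizer_step(optimizer, barrier=True)
```

Everything else -- the model definition, the loss, the backward pass -- is the same code that would run on an Nvidia GPU.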
This doesn't mean CUDA is irrelevant. But it does mean customers can now evaluate chips primarily on price and performance rather than software compatibility -- a shift that favors Alphabet's cost-optimized approach.
What it means for investors
Nvidia isn't going anywhere. The company will likely dominate model training for years, and its financial results reflect genuine, durable demand. However, the era of unchecked pricing power may be coming to an end.
The clearest evidence: According to a recent industry analysis, OpenAI secured roughly a 30% discount on its latest Nvidia hardware order by raising the credible option of shifting more workloads to alternative hardware, such as Alphabet's TPUs. Even when customers stay with Nvidia, Alphabet's presence caps what Nvidia can charge.
For Nvidia shareholders, this suggests margins may face pressure as competition intensifies. For Alphabet shareholders, it highlights an underappreciated growth driver. Google Cloud revenue jumped 34% last quarter to $15.2 billion, with AI infrastructure demand -- including TPUs -- cited as a key driver. The cloud backlog surged 82% year over year to $155 billion.
Alphabet won't dethrone Nvidia overnight. But it has successfully positioned the TPU as the industry's credible second option -- and in a market this large, second place is worth hundreds of billions.