Most artificial intelligence (AI) models are trained and then deployed in data centers, which are filled with thousands of specialized chips called graphics processing units (GPUs). Most AI developers don't have the financial resources to build that infrastructure themselves, but they can rent it from a handful of technology giants that operate hundreds of centralized data centers all over the world.
Those tech giants typically buy most of their GPUs from Nvidia (NVDA), which supplies the best AI hardware in the industry. The chipmaker continues to experience more demand than it can fill, which is driving a surge in its revenue and earnings. In fact, Nvidia has added a staggering $3 trillion to its market capitalization since the beginning of 2023, and it's now the second most valuable company in the world.
However, the fact that only a handful of companies can afford to build the best AI infrastructure isn't a good thing for Nvidia. During the fiscal 2026 first quarter (ended April 27), more than half of the company's total revenue came from just four unnamed customers, which means a pullback in AI infrastructure spending from any one of them could threaten the chip giant's incredible run of growth.
Let's take a look at who those top customers might be, so we can assess the sustainability of Nvidia's data center business.

Nvidia's revenue is highly concentrated
Nvidia generated $44.1 billion in total revenue during the fiscal 2026 first quarter. The data center segment was responsible for $39.1 billion of that figure, so AI GPUs are now the company's most important product by far.
While Nvidia doesn't disclose who its customers are, it does report some data on the concentration of its revenue base. During the first quarter, just four mystery customers accounted for 54% of the company's $44.1 billion in sales:
| Customer | Proportion of Nvidia's Q1 Revenue |
| --- | --- |
| Customer A | 16% |
| Customer B | 14% |
| Customer C | 13% |
| Customer D | 11% |
Data source: Nvidia.
That means Customer A alone spent around $7 billion with Nvidia during the first quarter (16% of $44.1 billion is roughly $7.1 billion), and there are only a handful of companies in the world with enough financial resources to keep that up. As I mentioned earlier, this concentration creates a risk for Nvidia: if Customer A were to reduce its capital expenditures, the chipmaker would find that revenue very hard to replace.
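To put those percentages in dollar terms, here's a quick back-of-the-envelope sketch in Python. The $44.1 billion figure and the four percentages come from Nvidia's disclosure; the per-customer dollar amounts are simple estimates of my own, since Nvidia doesn't report exact revenue by customer:

```python
# Back-of-the-envelope estimate of what each mystery customer spent,
# based on Nvidia's disclosed Q1 FY2026 revenue and concentration data.
TOTAL_REVENUE_BILLIONS = 44.1  # Nvidia's total fiscal Q1 2026 revenue

customer_shares = {
    "Customer A": 0.16,
    "Customer B": 0.14,
    "Customer C": 0.13,
    "Customer D": 0.11,
}

for name, share in customer_shares.items():
    # Implied quarterly spend = customer's share x total revenue
    print(f"{name}: ~${TOTAL_REVENUE_BILLIONS * share:.1f} billion")

combined = sum(customer_shares.values())
print(f"Combined: {combined:.0%} of revenue (~${TOTAL_REVENUE_BILLIONS * combined:.1f} billion)")
```

Running this puts Customer A at roughly $7.1 billion and the four customers combined at roughly $23.8 billion for a single quarter, which underscores just how few companies could plausibly fill those shoes.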
Who are Nvidia's mystery customers?
It's impossible to identify Nvidia's top customers with certainty, but we can make some pretty reasonable assumptions based on public forecasts issued by some of the world's biggest tech companies:
- Amazon (AMZN) said it will spend around $105 billion on AI data center infrastructure this calendar year.
- Microsoft (MSFT) said it is on track to spend over $80 billion on AI infrastructure during its fiscal year 2025 (which ends on June 30).
- Alphabet (GOOG) (GOOGL) plans to spend $75 billion on AI infrastructure this calendar year.
- Meta Platforms (META) says it will spend up to $72 billion to fuel its AI ambitions this year (a figure it recently increased from $65 billion).
Several other AI companies have smaller, but not insignificant, capital investments in the pipeline. Oracle, for example, recently told investors it will increase its data center spending to over $25 billion during its fiscal year 2026 (which just began on June 1). Then there are top AI start-ups like OpenAI, Anthropic, and Elon Musk's xAI, which also have very deep pockets.
While all of the above companies are developing AI for their own purposes, Amazon, Microsoft, and Alphabet are also three of the world's largest providers of cloud services. In other words, they build the centralized data centers I mentioned earlier, which they rent to AI developers for a profit.
A potential $1 trillion annual opportunity
Despite the exorbitant amount of AI infrastructure spending on the table this year, Nvidia CEO Jensen Huang thinks this is just the beginning. He predicts AI data center capital expenditures could top $1 trillion per year by 2028, because every new generation of AI models requires more computing capacity than the last.
For example, Huang says some of the newest "reasoning" models consume up to 1,000 times more computing capacity than their predecessors. These models spend time "thinking" in the background before rendering responses, which helps them produce more accurate answers than traditional large language models (LLMs) that generate fast, one-shot responses.
Nvidia's Blackwell and Blackwell Ultra GPU architectures were designed to meet the growing demand for inference capacity from reasoning models, which is why products like the GB200 and GB300 are among the most sought-after AI processors in the world.
If Huang is right about the trajectory of AI infrastructure spending, then the risks associated with Nvidia's highly concentrated revenue probably won't materialize for at least a few more years. And since Nvidia stock trades at a relatively attractive valuation, those potential risks probably shouldn't keep investors from buying it right now.