For the better part of the last three years, the arrival of artificial intelligence (AI) has been all Wall Street and everyday investors have cared about. And there's a good reason for that.

Empowering software and systems with the capacity to make split-second decisions and become more efficient at their assigned tasks is a game-changing technology that the analysts at PwC foresee boosting global gross domestic product by $15.7 trillion come 2030. Even if PwC's estimate is only somewhere in the ballpark, it implies that dozens, if not hundreds, of public companies will emerge as enormous winners of the AI revolution.

But it's no secret that Nvidia (NVDA 2.05%) has been Wall Street's AI darling and shining star. Its graphics processing units (GPUs) are the brains powering decision-making, generative AI solutions, and large language model (LLM) training in AI-accelerated data centers. With demand for AI-GPUs overwhelming their supply by a considerable amount, Nvidia has enjoyed exceptional pricing power for its Hopper (H100), Blackwell, and Blackwell Ultra GPUs.


Nvidia's strengthening foundation has also been propped up by its CUDA software platform. This is the toolkit developers use to maximize the compute capabilities of their Nvidia hardware, as well as to build and train LLMs. Think of CUDA as the hook that keeps buyers of Hopper, Blackwell, and Blackwell Ultra chips loyal to Nvidia's ecosystem of products and services.

Yet in spite of these competitive edges and Nvidia's knack for trouncing Wall Street's sales and profit expectations, one Wall Street analyst recently reduced his firm's price target on Wall Street's most-beloved AI stock.

One Wall Street analyst is tempering expectations

Almost every one of the 63 analysts covering Nvidia in September has a positive view of the company and its stock. A combined 58 analysts rate it as some form of strong buy or buy equivalent, compared to four hold/market perform ratings and one sell rating.

Citigroup analyst Atif Malik falls into the majority. He and his firm hold a buy rating on Nvidia stock and expect robust growth in AI-GPUs going forward. But he's not quite as bullish as he used to be.

Not long after Nvidia lifted the hood on its fiscal second-quarter operating results, Malik took the road less traveled on Wall Street and reduced his firm's price target for Nvidia by $10 per share to $200 from $210. On a percentage basis, this isn't a groundbreaking change. But in terms of market cap, a $10-per-share reduction in price target equates to nearly a quarter of a trillion dollars!
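That quarter-of-a-trillion-dollar figure follows directly from Nvidia's share count. Here's a back-of-the-envelope sketch in Python; the roughly 24.4 billion shares outstanding is my approximation, not a figure from Citi's note:

```python
# Rough market-cap impact of a price-target cut.
# Share count is an assumption (~24.4 billion), not from the analyst note.
shares_outstanding = 24.4e9      # approximate Nvidia shares outstanding
price_target_cut = 210 - 200     # Malik's cut: $210 down to $200, i.e., $10/share

market_cap_impact = shares_outstanding * price_target_cut
print(f"Implied market-cap reduction: ${market_cap_impact / 1e12:.2f} trillion")
```

At roughly $0.24 trillion, a seemingly modest $10-per-share adjustment wipes out more implied value than the entire market cap of most S&P 500 companies.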

Malik's "why?" is simple: competition.

In the note he released on behalf of Citi, Malik pointed to the rise of custom chips as a potential threat to Nvidia's thus-far monopoly-like hold on the AI hardware used in high-compute enterprise data centers. He specifically pointed to a relatively new member of the trillion-dollar club, AI-networking specialist Broadcom (AVGO -2.01%), as the catalyst for this price-target cut.

When Broadcom delivered its quarterly operating results, it revealed a $10 billion order from an unnamed customer (speculated by some Wall Street analysts to be privately held OpenAI) for its next-generation custom AI accelerator chips, known as XPUs. Malik anticipates XPU sales will enjoy 53% growth in 2026 compared to the current year, while AI-GPU growth will "slow" to 34% next year.

While the latter is still a highly impressive figure, Malik believes Broadcom and other companies may begin to chip away (pun fully intended) at Nvidia's dominance and slow its growth potential -- thus the need to temper expectations.


Right risk, wrong threat

While some on Wall Street, like Atif Malik, speculate about businesses shifting their infrastructure workloads to custom chips, I'd argue the greatest threat to Nvidia's biggest competitive advantage can be found within.

While no external competitor has come close to rivaling the compute capabilities of Nvidia's hardware -- and CEO Jensen Huang plans to keep it this way with an aggressive innovation timeline -- the company's biggest competitive edge has always been scarcity.

Even with global No. 1 chip fabricator Taiwan Semiconductor Manufacturing rapidly expanding its chip-on-wafer-on-substrate (CoWoS) packaging capacity, the supply of AI-GPUs has come nowhere close to satiating demand.

The law of supply and demand is pretty straightforward in this situation: the price of AI-GPUs will climb until demand tapers. Nvidia has had no trouble netting $40,000 or more per chip, which has, in some instances, represented a 100% to 300% premium to the prices rivals charge for their AI-GPUs.
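To put that premium range in dollar terms, here's an illustrative sketch: the $40,000 figure comes from the article, while the implied rival prices are simply inferred from the stated 100% and 300% premium bounds.

```python
nvidia_price = 40_000  # per-chip price cited for Nvidia's AI-GPUs

# Implied rival prices at the article's 100% and 300% premium bounds:
# a premium of p means nvidia_price = rival_price * (1 + p).
implied_rival_prices = {
    premium: nvidia_price / (1 + premium)
    for premium in (1.0, 3.0)
}

for premium, rival in implied_rival_prices.items():
    print(f"A {premium:.0%} premium implies a rival price near ${rival:,.0f}")
```

In other words, a 100% premium implies rivals are charging around $20,000 per chip, and a 300% premium implies closer to $10,000 -- a gap only sustainable while Nvidia's hardware remains scarce.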

The threat for Nvidia is that this AI-GPU scarcity is going to be eroded by its top customers, as measured by net sales. Many of these leading customers -- mostly members of the "Magnificent Seven" -- are internally developing AI chips and solutions for use in their data centers. Though these chips won't be sold externally, and they're no match for the compute potential of Hopper, Blackwell, or Blackwell Ultra, they have the advantage of being significantly cheaper than Nvidia's hardware -- and they aren't backordered.

If members of the Mag-7 begin turning to their internally developed chips on a complementary basis, it means less available real estate for Nvidia's hardware to occupy. It also reduces the scarcity that's helped fuel Nvidia's pricing power and gross margin.

Furthermore, internally developed chips from Nvidia's top clients might delay upgrade cycles, especially if Huang's aggressive innovation timeline rapidly depreciates the value of prior-generation AI-GPUs.

While it's possible that custom chipmakers like Broadcom will steal some of the spotlight from Nvidia in the coming quarters, the biggest threat is likely to come from Nvidia's own top customers by net sales.