Ark Invest's Cathie Wood is a polarizing figure among investors. Wood's supporters praise her for boldly adding growth stocks to Ark's exchange-traded funds (ETFs), but her critics claim she ignores valuations, sells winners too early, and holds losers too long.
Ark sold most of its Nvidia (NVDA) shares in 2022 and 2023, right before the AI boom drove the chipmaker's stock to record highs. Over the past three years, Nvidia's stock has surged more than 950%, while Wood's flagship Ark Innovation ETF (ARKK) has risen about 120%. Over the past year, Wood gradually rebuilt a smaller stake in Nvidia at much higher prices.
That messy trade suggests that we should take Wood's predictions with a grain of salt. That said, many growth-oriented investors still closely follow Wood's claims.
In Ark's latest "Big Ideas 2026" report, Wood warns that Nvidia will face tougher competition from AMD (AMD) in the AI market this year. Should investors expect the underdog to catch up to Nvidia, or is Wood glossing over some of Nvidia's key competitive advantages?
What are Nvidia's core strengths?
Nvidia once mainly sold its discrete GPUs for high-end gaming PCs. Yet over the past two decades, it has gradually expanded into the data center market by launching more powerful GPUs for machine learning and AI workloads. Unlike CPUs, which are optimized for sequential tasks, GPUs are designed to execute tasks in parallel -- which allows them to simultaneously process large volumes of integer and floating-point operations.
As the AI market expanded, Nvidia reinforced its first-mover advantage with smaller, denser, and more power-efficient chip architectures: Turing (2018), Ampere (2020), Hopper (2022), and Blackwell (2024). Nvidia also locked its data center customers into CUDA (Compute Unified Device Architecture), a proprietary programming platform optimized for its own chips. The stickiness of that ecosystem significantly widens its moat against other AI chipmakers.
Nvidia controlled 92% of the discrete GPU market in 2025, according to Carbon Credits, while AMD held an 8% share. Most of the world's leading AI companies -- including Microsoft, OpenAI, Alphabet's Google, and Meta Platforms -- use its GPUs to power their latest AI applications.
Nvidia's data center GPUs are expensive, starting at about $25,000 for an individual H100 chip, but its "best in breed" reputation gives it plenty of pricing power against its cheaper competitors. As competition in the AI software market heats up, most companies will likely stick with Nvidia's trusted chips rather than try more affordable or less established alternatives.
Why does Wood think AMD has a chance?
AMD generates nearly half of its revenue from its data center business, which sells Epyc CPUs and Instinct GPUs for servers. That segment consistently generated double-digit revenue growth over the past year, even as it faced tighter export curbs in China, macro headwinds for enterprise spending, and intense competition from Nvidia.
AMD sells its Epyc CPUs and Instinct GPUs as cheaper alternatives to Intel's Xeon CPUs and Nvidia's A100, H100, and H200 GPUs. The Instinct MI300X, which competes against Nvidia's H100, costs about $15,000. Many of Nvidia's top customers -- including Microsoft, OpenAI, and Meta -- are already using AMD's chips.
Yet it isn't a binary choice: those tech giants can install both types of chips across their expanding data centers. Therefore, there could be plenty of room for Nvidia and AMD to grow without trampling on each other. Moreover, Nvidia's aforementioned advantages -- especially CUDA -- should still make it the default choice for supporting high-end AI applications.
AMD is also a more diversified chipmaker than Nvidia: it sells CPUs, APUs (which merge a CPU and GPU on a single chip), and embedded chips for other markets. But that diversification cuts both ways -- spreading its resources across those other businesses could dilute its data center investments, making it harder to match Nvidia's scale and technological lead.
Both stocks could still be solid AI investments
AMD's data center business is growing, but I wouldn't consider it a significant threat to Nvidia yet. Instead, both companies could still be great long-term plays on the global AI market -- which Grand View Research predicts will grow at a CAGR of 30.6% from 2026 to 2033.
From fiscal 2025 (which ended last January) to fiscal 2028, analysts expect Nvidia's revenue and earnings per share (EPS) to grow at a CAGR of 47% and 45%, respectively. Its stock still looks reasonably valued at 26 times next year's earnings -- and it still has plenty of upside potential regardless of what Cathie Wood or other cautious pundits believe.
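For readers who want to sanity-check what those compound growth rates imply, here is a minimal back-of-the-envelope sketch using only the figures cited above (the 47% and 45% three-year CAGRs for Nvidia, and Grand View Research's 30.6% forecast for the AI market from 2026 to 2033):

```python
def growth_multiple(cagr: float, years: int) -> float:
    """Total growth multiple implied by a compound annual growth rate."""
    return (1 + cagr) ** years

# Nvidia analyst estimates, fiscal 2025 -> fiscal 2028 (three years):
rev_multiple = growth_multiple(0.47, 3)   # revenue roughly triples (~3.18x)
eps_multiple = growth_multiple(0.45, 3)   # EPS roughly triples (~3.05x)

# Grand View Research's AI market forecast, 2026 -> 2033 (seven years):
ai_multiple = growth_multiple(0.306, 7)   # market grows roughly 6.5x
```

In other words, if those estimates hold, Nvidia's revenue and earnings would roughly triple over three fiscal years, while the overall AI market would expand about sixfold over the forecast window -- context for why a 26 times forward earnings multiple can still look reasonable.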