The big story of 2023 has been the surging demand for artificial intelligence (AI). Just about every business is looking at ways to implement AI in its operations, while several big tech companies are pushing the boundaries of what AI can do.

But a challenge lies ahead for the companies at the forefront of AI development: it's increasingly expensive to build and train large language models. Not only do these AI models require a large number of very expensive chips, but the amount of energy the process consumes is staggering. One study found, for example, that a simple ChatGPT prompt requires roughly 10 times as much electricity as a standard Google search. Multiplied across billions of uses, that has a huge impact on costs, both monetary and environmental.

And power demands are only getting bigger. "Reducing power consumption is becoming more important as AI power demand could easily triple in two years," Taiwan Semiconductor Manufacturing (TSM) CEO C. C. Wei said at a recent conference. So for AI developers to keep pushing the technology forward, they need more efficient chips. And TSMC is likely the company that will bring that next big advancement to market.

Here's what investors need to know.

What goes into faster and more efficient chips

While chip designers like Nvidia (NVDA) get a lot of attention, it's important to understand what allows them to produce such powerful chip designs in the first place.

The amount of processing power any individual chip has is largely determined by how many transistors it contains. More transistors mean the chip can run more processes at once, giving it more computing power.

But there are limits to how big a processor can be for use in a server (or a laptop or smartphone). So the only way to build chips with more transistors is to fit more of them into the same amount of space. That's easier said than done.

The most advanced manufacturing technology available today is the 3-nanometer process. It's what Apple (AAPL) used to fit 92 billion transistors onto the M3 Max processor in its MacBook Pro, a chip measuring approximately 400 square millimeters.
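
For a rough sense of what that density means, here's a back-of-the-envelope sketch (in Python) that uses only the two figures above; the roughly 400-square-millimeter die size is the approximation already noted:

# Rough transistor-density estimate for the chip described above, using only
# the figures cited in the article: ~92 billion transistors on a die of
# roughly 400 square millimeters.
transistors = 92e9        # ~92 billion transistors
die_area_mm2 = 400        # approximate die area in square millimeters

density_per_mm2 = transistors / die_area_mm2
print(f"~{density_per_mm2 / 1e6:.0f} million transistors per square millimeter")
# Prints roughly 230 million transistors per square millimeter -- the kind of
# density only a leading-edge process makes possible.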

The next generation is the 2-nanometer process, which is expected to boost performance 10% to 15% at the same power, or cut power consumption 25% to 30% at the same speed, compared with 3nm. TSMC is racing Samsung and Intel to bring that process to market and to prove it can print those chips economically and with high reliability.

TSMC plans to install tooling for its 2nm process at one facility in April, though volume manufacturing won't start until the second half of 2025. (Apple is likely to be the first customer for that process.) Intel expects to start producing its own 2nm-class chips by the end of 2024. But getting a process in place is one thing; getting yields and pricing to a point where it makes sense for customers to move their designs away from their existing manufacturer is another.

Why TSMC will be the company to support the next generation of AI chips

Despite the competition, TSMC holds a couple of significant advantages when it comes to developing the next generation of chips.

First, it can't be discounted that manufacturing chips for other companies is the only thing TSMC does. Intel designs its own chips, which can compete directly with those of would-be customers. Likewise, Samsung designs chips and builds electronic devices, so while it has a great built-in customer in itself, that dual role curbs its appeal to many other chip designers.

Moreover, TSMC has a scale advantage. It accounted for 57.9% of contract chip-manufacturing revenue in the third quarter. Samsung, its next-closest competitor, captured just 12.4%.

That scale matters because developing the processes and tooling for each new generation of chips is becoming increasingly expensive. TSMC spent $1.6 billion on research and development last quarter. Few other contract chip manufacturers even bring in that much revenue in a quarter.

TSMC can justify spending so heavily because it has existing relationships with huge companies like Apple and Nvidia. That creates a virtuous cycle: the tech giants buy from TSMC, TSMC reinvests some of that money in developing the next-generation process, and that process attracts even more customers as competitors struggle to catch up.

The stock is dirt cheap

Despite its position as a leading supplier of the most advanced AI chips on the market, TSMC doesn't trade at the sky-high valuations of many well-known AI stocks.

Even after the stock's strong run since the start of November, shares still trade at less than 17 times analysts' consensus estimate for 2024 earnings. Not only is that lower than the forward earnings multiple of the S&P 500 as a whole, it's also well below TSMC's historical P/E ratio of around 21.5. So if the stock merely sees its multiple expand toward that historical norm, it should trade substantially higher by the end of next year.
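
To put a rough number on that multiple-expansion argument, here's a quick sketch (in Python) that treats the figures above, roughly 17 times forward earnings versus a historical multiple of about 21.5, as given and ignores earnings growth, dividends, and currency effects:

# Back-of-the-envelope sketch of the multiple-expansion argument above.
# Uses only the approximate figures cited in the article.
current_forward_pe = 17.0   # shares trade at less than ~17x 2024 estimates
historical_pe = 21.5        # TSMC's historical P/E ratio, per the article

# If earnings estimates hold and the multiple drifts back to its norm,
# the implied price change is simply the ratio of the two multiples.
implied_upside = historical_pe / current_forward_pe - 1
print(f"Implied upside from multiple expansion alone: {implied_upside:.0%}")
# Prints roughly 26%.

Any earnings growth in 2024 would come on top of that.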

Considering the importance of TSMC for the future of AI development, it could see even better performance as demand for powerful and energy-efficient chips grows.