The boom in data traffic is creating the need for stronger cloud infrastructure, forcing cloud service providers (CSPs) to upgrade their data centers to process the increasing volume and complexity of data. Gartner estimates that public cloud infrastructure investments will grow almost 37% this year, to $34.6 billion, as Amazon, Microsoft (NASDAQ:MSFT), and Alphabet's (NASDAQ:GOOG) (NASDAQ:GOOGL) Google spend to bring their data centers up to speed.
One way to tap this multibillion-dollar market is through NVIDIA (NASDAQ:NVDA), as its Tesla graphics processing units (GPUs) are helping the CSPs bring raw computing power to their data centers. As it turns out, the chipmaker's data center business has taken off remarkably in recent quarters, outpacing the growth in other key segments.
In fact, the data center business is now NVIDIA's second-largest revenue source, and it is likely that it will continue to get bigger given the company's recent moves in this space.
NVIDIA is gaining from GPU adoption in the cloud
A huge amount of data is being generated thanks to connected cars and other Internet of Things devices. Intel CEO Brian Krzanich believes that a single autonomous car could generate 4,000 gigabytes of data per day from just one hour of driving, thanks to all the sensors and cameras installed. Machina Research, meanwhile, forecasts that the number of connected devices will grow over fourfold, to 27 billion, by 2025.
This is going to create a big challenge for CSPs in moving all of that data into their data centers for processing, with Gartner forecasting that 25% of the data generated will go to waste before it is ever analyzed. NVIDIA, however, has a solution for the CSPs with its Tesla GPUs, which are specifically aimed at this market.
The Tesla GPUs accelerate a data center's computing power, enabling service providers to process huge data sets at a fraction of the cost of building a new facility, which would otherwise be required to analyze the same amount of data. NVIDIA does this by allowing service providers to replace central processing units with its high-performance GPUs, which it says can deliver a fivefold boost in computational power and slash costs by 60%.
Not surprisingly, Alphabet has decided to use the chipmaker's Tesla K80 GPUs on the Google Cloud Platform to offer deep-learning capabilities to its users. Google Cloud users can now use up to eight GPUs to perform intensive tasks such as high-performance data analysis, seismic analysis, and video transcoding.
Google is going to charge $0.70 per hour for each K80 GPU in the U.S. and $0.77 per hour in Asia and Europe, allowing users to carry out deep-learning operations without any significant capital investment. What's more, Tencent has also decided to use NVIDIA's Tesla P100 and P40 GPU accelerators to give artificial intelligence (AI) computing capabilities to its enterprise customers, giving the chipmaker access to China's booming public cloud-infrastructure market.
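The appeal of that pay-as-you-go pricing is easy to see with some back-of-the-envelope arithmetic. The sketch below uses only the rates quoted above ($0.70 per K80 GPU-hour in the U.S., up to eight GPUs per user); the round-the-clock usage pattern is a hypothetical illustration, not a published figure:

```python
# Rates quoted in the article; the usage scenario below is illustrative only.
US_RATE_PER_GPU_HOUR = 0.70       # Google's U.S. price per K80 GPU-hour
EU_ASIA_RATE_PER_GPU_HOUR = 0.77  # price per K80 GPU-hour in Asia and Europe

def hourly_cost(num_gpus: int, rate_per_gpu_hour: float) -> float:
    """Cost of renting num_gpus GPUs for one hour at the given rate."""
    return num_gpus * rate_per_gpu_hour

# A U.S. user maxing out all eight available K80 GPUs:
per_hour = hourly_cost(8, US_RATE_PER_GPU_HOUR)   # 8 x $0.70 = $5.60/hour
per_month = per_hour * 24 * 30                    # ~$4,032 for a 30-day month, 24/7

print(f"8 GPUs in the U.S.: ${per_hour:.2f}/hour, ~${per_month:,.0f}/month")
```

Even run continuously for a month, eight GPUs cost on the order of a few thousand dollars, versus the far larger capital outlay of buying and operating equivalent hardware outright.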
NVIDIA is targeting AI cloud computing
AI cloud computing is fast gaining traction thanks to the advent of the Internet of Things and connected cars that require autonomous decision making based on data generated, and NVIDIA does not want to miss the gravy train. The chipmaker recently partnered with Microsoft to bring to market a hyperscale GPU accelerator aimed at AI cloud computing applications.
NVIDIA and Microsoft expect their framework -- the HGX-1 -- to be used in healthcare, self-driving cars, and voice recognition, among other applications, as it is positioned as the standard architecture for AI cloud computing. The HGX-1 is based on an open-source, modular design that makes it scalable: it houses eight Tesla GPUs that can be connected to the central processing unit as the workload demands.
NVIDIA has made a smart move by tying up with Microsoft to develop an AI cloud computing solution, as the latter's Azure cloud service is growing at a faster pace than the market leader. Amazon Web Services -- the e-commerce giant's cloud computing subsidiary -- currently leads this space, but Microsoft's Azure could overtake it in a couple of years, according to a Morgan Stanley survey.
In all, NVIDIA's data center business has a lot of room for growth as the company's GPUs will play a mission-critical role in cloud computing thanks to the chipmaker's deep-learning and AI capabilities. More important, the company is partnering with the key players to sustain its rapid growth in this space.
Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Teresa Kersten is an employee of LinkedIn and is a member of The Motley Fool's board of directors. LinkedIn is owned by Microsoft. Harsh Chauhan has no position in any stocks mentioned. The Motley Fool owns shares of and recommends Alphabet (A shares), Alphabet (C shares), Amazon, and Nvidia. The Motley Fool recommends Intel. The Motley Fool has a disclosure policy.