Facebook (NASDAQ:FB) is using Nvidia's (NASDAQ:NVDA) new Tesla M40 GPUs in Big Sur, a computing platform designed for AI and machine learning tasks. Nvidia says that it worked with Facebook to ensure that Big Sur will deliver "maximum performance for machine learning workloads, including the training of large neural networks across multiple Tesla GPUs."

This decision highlights an interesting market opportunity for Nvidia, the largest manufacturer of add-in graphics boards in the world. It also demonstrates the effectiveness of GPUs in machine learning tasks when compared to traditional CPUs. Will Facebook's vote of confidence in Nvidia's GPUs attract the attention of other players in the machine learning market?


Facebook's "Big Sur" platform. Source: Facebook.

Why is Facebook investing in AI?
Facebook has been investing heavily in machine learning to help translate content, detect faces in photos and videos, and improve its core news feed algorithm by analyzing text. Facebook's upcoming M assistant, its answer to Siri and Cortana, uses machine learning to comb through the social network's massive database. The better Facebook's AI can identify people, objects, and places in user content, the more accurate its targeted ads can be.

That's the same reason Alphabet (NASDAQ:GOOG) (NASDAQ:GOOGL) launched its Photos app, which uses AI to organize user photos by person, object, location, and other traits. Google also uses its cloud-based machine learning service to predict search queries across its ecosystem. As tech giants race to create the most efficient machine learning and predictive analytics systems, demand for powerful machine learning platforms like Big Sur will rise.

Why is Nvidia the top choice?
GPUs have proven to be 10 to 20 times faster than comparable CPUs at "deep learning" tasks such as image recognition and algorithmic calculations. Intel (NASDAQ:INTC) is the largest CPU and GPU maker in the world, but the majority of its GPUs are lower-powered integrated solutions for desktops and laptops.

Earlier this year, Intel claimed that its new Xeon Phi processors could offer performance similar to Nvidia's Tesla GPUs in deep learning tasks. However, Nvidia claims that in a matchup between its Tesla K80 GPUs and Intel's Xeon Phi 7120 coprocessors, the Tesla GPU was "two to five times" faster at running "key science applications" than the Xeon Phi. Nvidia also noted that while programming requirements for the Tesla and Xeon Phi are similar, "the results are significantly better on a GPU."

Nvidia also makes it easier for developers to create deep learning apps with CUDA (Compute Unified Device Architecture), its proprietary parallel computing platform and API model. CUDA works with common programming languages like C++ and gives developers direct access to the GPU. AMD, Nvidia's main rival in high-end GPUs, relies on OpenCL (Open Computing Language), an open standard that lacks CUDA's deep learning libraries.
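To give a sense of what "direct access to the GPU" looks like in practice, here is a minimal sketch of CUDA's programming model: a function marked `__global__` (a "kernel") is ordinary C++ code that the GPU runs across thousands of threads at once. The kernel and variable names below are illustrative, not taken from any Nvidia sample.

```cuda
#include <cstdio>

// A CUDA kernel: plain C++ that executes in parallel on the GPU.
// Each thread computes one element of the output vector.
__global__ void vectorAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's global index
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;               // one million elements
    size_t bytes = n * sizeof(float);

    // Allocate memory visible to both CPU and GPU.
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch a million additions at once, spread across GPU threads.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();             // wait for the GPU to finish

    printf("c[0] = %.1f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

This same pattern, scaled up to the matrix multiplications at the heart of neural network training, is why frameworks built on CUDA can spread a workload across thousands of GPU cores instead of a handful of CPU cores.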

Nvidia's plans for growth
Nvidia's interest in deep learning and AI extends far beyond Facebook's data centers. Back in March, it launched a $10,000 driverless car development platform called Drive PX, which is powered by two Tegra X1 processors. It also unveiled the Digits DevBox, a $15,000 deep learning "mini-supercomputer" powered by four Titan X GPUs.


Nvidia's Digits DevBox. Source: Nvidia.

In November, drone market leader DJI Innovations launched the Manifold, a $499 Tegra K1-powered computer for drones. Last month, Nvidia unveiled the Jetson TX1, a "deep learning" module for drones and driverless cars that runs on its Tegra X1. Nvidia will start selling the Jetson TX1 next year for $299.

Since Tegra SoCs merge an ARM-licensed CPU with Nvidia's own GPUs, the Jetson can help drones and driverless cars process what they "see" and react accordingly. That's why Intel recently combined its RealSense depth-sensing cameras with its Atom processors in its new SoCs for drones. Stronger Tegra sales across these markets can offset the SoC's weakness in smartphones and tablets. Last quarter, Tegra sales fell 23% annually and accounted for less than 10% of Nvidia's top line.

Meanwhile, Nvidia expects to deploy its high-end Tesla GPUs in supercomputers and other high-end systems. By aiming Tegra SoCs at the mainstream market and the Tesla GPUs at the high-end market, Nvidia can cast a wide net over the fledgling AI and machine learning market.

The key takeaway
With Tegra and Tesla, Nvidia is leveraging its market-leading position in high-end GPUs to establish a firm foothold in the machine learning market. These investments might not generate meaningful revenue over the next few quarters, but they will likely pay off once the machine learning battle heats up, drones lift off, and driverless cars hit public roads.

Leo Sun has no position in any stocks mentioned. The Motley Fool owns shares of and recommends Alphabet (A and C shares) and Facebook. The Motley Fool recommends Intel and Nvidia. Try any of our Foolish newsletter services free for 30 days. We Fools may not all hold the same opinions, but we all believe that considering a diverse range of insights makes us better investors. The Motley Fool has a disclosure policy.