At the 2015 Consumer Electronics Show, graphics chip designer NVIDIA (NASDAQ:NVDA) unveiled the first generation of its Drive PX platform for automobiles, aimed at enabling advanced driver assistance features and, eventually, autonomous driving. The hardware was impressive, featuring two of the company's Tegra X1 SoCs, and more than 50 automakers, suppliers, developers, and research institutions have used the platform for autonomous-driving research and development.

Drive PX is a deep learning platform. Software is trained on a supercomputer by feeding it enormous amounts of data, and that software is then run on the Drive PX system within a car to identify objects such as other cars, pedestrians, and road signs.
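The split described above, heavy training done offline and lightweight recognition done in the car, can be illustrated with a toy sketch. The classifier, features, and labels below are entirely made up for illustration; real systems use deep neural networks, not nearest-centroid matching.

```python
# Toy illustration of the train-offline / infer-in-vehicle pattern the
# article describes. "Training" (normally done on a data-center
# supercomputer) averages labeled feature vectors into class centroids;
# "inference" (the part that runs in the car) only measures distances,
# which is comparatively cheap.
import math

def train(samples):
    """samples: list of (feature_vector, label). Returns label -> centroid."""
    sums, counts = {}, {}
    for vec, label in samples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def infer(model, vec):
    """Return the label whose centroid is nearest to vec."""
    return min(model, key=lambda label: math.dist(model[label], vec))

# Made-up 2-D "features" standing in for processed camera data.
training_data = [
    ([0.9, 0.1], "car"), ([0.8, 0.2], "car"),
    ([0.1, 0.9], "pedestrian"), ([0.2, 0.8], "pedestrian"),
]
model = train(training_data)
print(infer(model, [0.85, 0.15]))  # → car
```

The key point is the asymmetry: training touches the whole dataset, while inference is a fixed, fast computation that fits on embedded hardware.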

The second-generation product, unimaginatively called Drive PX 2, was announced at the 2016 Consumer Electronics Show. Drive PX 2 is vastly more powerful than the first-generation product, with NVIDIA stating that the new platform has more than 10 times the computational horsepower of the original Drive PX. Drive PX 2 consists of two next-generation Tegra processors, as well as two discrete GPUs based on the upcoming Pascal architecture. The system is capable of 8 trillion single-precision floating-point calculations per second, or 24 trillion deep learning operations per second, according to NVIDIA.

Drive PX 2 system. Source: NVIDIA.

In addition to analyzing input from as many as 12 cameras, Drive PX 2 supports lidar, radar, and ultrasonic sensors. The system can combine this data to more accurately detect objects, calculating an optimal route taking into account all of the objects in the environment.
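One common way to combine redundant readings like these is inverse-variance weighting: each sensor's estimate is weighted by how noisy that sensor is. This is a generic fusion technique, not NVIDIA's actual algorithm, and the variance figures below are illustrative rather than real spec values.

```python
# Minimal sketch of sensor fusion by inverse-variance weighting: each
# sensor reports an estimated distance to the same object plus a noise
# variance; less noisy sensors get proportionally more weight.
def fuse(readings):
    """readings: list of (estimate, variance). Returns fused estimate."""
    weights = [1.0 / var for _, var in readings]
    total = sum(weights)
    return sum(w * est for (est, _), w in zip(readings, weights)) / total

# Distance to a car ahead, in meters, from three sensor types
# (illustrative noise levels only).
readings = [
    (25.4, 1.0),   # camera: decent, but affected by lighting
    (25.1, 0.04),  # lidar: very precise
    (26.0, 0.5),   # radar: robust in bad weather, but coarser
]
print(round(fuse(readings), 2))
```

Here the lidar reading dominates the fused estimate because its variance is smallest, which is exactly the behavior you want when sensors disagree.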

Is a supercomputer really necessary?
The Drive PX 2 is essentially a supercomputer. At 8 TFLOPS of raw computational power, the platform is about 60 times more powerful, at least theoretically, than Intel's Core i7-6700K desktop CPU. But why does a car need so much computational horsepower?
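A quick back-of-envelope check makes the comparison concrete. Taking the article's two figures at face value, the 60x claim implies NVIDIA is crediting the desktop chip with roughly 0.13 TFLOPS on the workload it measured:

```python
# Implied CPU throughput from the comparison above: 8 TFLOPS for
# Drive PX 2, claimed to be ~60x a Core i7-6700K.
drive_px2_tflops = 8.0
speedup = 60.0
implied_cpu_tflops = drive_px2_tflops / speedup
print(f"{implied_cpu_tflops * 1000:.0f} GFLOPS")  # ~133 GFLOPS
```

How that CPU figure was measured is not specified, so treat the 60x number as a marketing-grade estimate rather than a rigorous benchmark.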

When I'm driving a car, I can keep track of at most a few cars around me at any given time. Navigating an all-way stop intersection with one lane going in each direction is easy; there are only three cars to keep track of. Two lanes in each direction, though, doubles the number of cars to six, making for a far more stressful and mentally taxing situation. If you use your phone while driving, as many people seem to do, your ability to keep track of what's going on around you deteriorates dramatically from this already low level.

Computers don't have the same problem. With enough computational power, a computer can detect and track dozens, hundreds, or even thousands of objects in real time, including other cars and pedestrians. This requires some serious hardware to deal with the massive amount of data coming from the various cameras and sensors, but the Drive PX 2 system is up to the task, able to recognize up to 2,800 images per second.
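Dividing the throughput figures above shows why the system has headroom for real-time tracking. Assuming the 2,800 images per second are spread evenly across the maximum of 12 camera feeds:

```python
# Per-camera budget implied by the article's figures: 2,800 images
# recognized per second across up to 12 camera feeds.
images_per_second = 2800
cameras = 12
per_camera = images_per_second / cameras
print(f"{per_camera:.0f} images/sec per camera")  # ~233
```

That is well above the roughly 30 frames per second of typical video, leaving margin for the rest of the processing pipeline.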

Detecting objects is only half the battle. Once the car has determined the objects within its environment, it needs to generate a plan based on this information. This process requires understanding its environment, and that's a hard problem. Earlier this year, one of Google's self-driving cars being tested in Austin, Texas, became befuddled when a bicyclist on a fixed-gear bike did a track-stand at an intersection. The car lurched forward and abruptly stopped multiple times, never making it past the middle of the intersection, according to the cyclist. The car detected the bicycle, but it was unable to understand what was happening.

Source: NVIDIA.

This is a great illustration of why so much computational power is necessary for a self-driving car to be able to act reasonably. There are so many edge cases, such as stop signs obstructed by trees and track-standing bicyclists, that any self-driving car system needs to be both extremely powerful and trained with an enormous amount of data.

Software and competition
Software is also important, and at CES NVIDIA announced a new software development kit called DriveWorks, aimed at making it easier for developers to build applications on top of the Drive PX platform. This is similar to what NVIDIA has done with GameWorks, a set of tools and libraries that allow game developers to easily include advanced visual effects in their games without needing to reinvent the wheel. By winning over developers, NVIDIA is hoping to make its technology the de facto standard in the eventual self-driving car.

NVIDIA is certainly not without competition. Following NVIDIA's announcement of the Drive PX 2 platform, Mobileye (NYSE:MBLY), a maker of camera-based advanced driver assistance systems, dismissed NVIDIA's efforts; CTO Amnon Shashua claimed that NVIDIA's technology is too costly and has limited real-world applicability. Mobileye is instead focused on real-time crowd-sourced mapping technology, called Road Experience Management, to enable autonomous driving. Mobileye is currently working with General Motors to integrate REM.

With its new Drive PX 2 system, NVIDIA is doubling down on its goal of providing the brains for the self-driving car. Which approach ultimately wins out, whether it's NVIDIA's supercomputer or Mobileye's crowd-sourced mapping technology, remains to be seen, and it's likely that a mainstream fully autonomous car is still years away. But NVIDIA is laying the groundwork today to be a major force in the automotive industry.

Timothy Green owns shares of Nvidia. The Motley Fool recommends General Motors, Intel, and Nvidia. Try any of our Foolish newsletter services free for 30 days. We Fools may not all hold the same opinions, but we all believe that considering a diverse range of insights makes us better investors. The Motley Fool has a disclosure policy.