NVIDIA's (NASDAQ:NVDA) powerful new Tegra K1 chip allows real-time processing of camera and sensor data gathered by cars. The 192-core processor can handle advanced visual processing such as object detection, collision avoidance, and recognition of pedestrians in real time.
Motley Fool analyst Rex Moore met up with NVIDIA's Danny Shapiro at the Consumer Electronics Show in Las Vegas. In this video, Shapiro explains the technology behind the chip, and how NVIDIA has developed its business model around software partners.
A full transcript follows the video.
Danny Shapiro, NVIDIA: There's a demo out there, if we want to show, that is about driver assistance and using the 192 cores in our brand new Tegra K1 superchip to do driver assistance.
What that means is that the cameras and sensors that are now appearing on cars can be processed in real time on the vehicle, to lead to safer driving. The ability now to recognize pedestrians outside the vehicle, to do lane departure warning, and to do object detection and collision avoidance is going to help save lives.
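To make the idea concrete, here is a minimal, hypothetical sketch of the kind of per-frame check a collision-avoidance system performs on sensor data. This is not NVIDIA's software; the function names and the 2-second warning threshold are illustrative assumptions.

```python
# Toy sketch (not NVIDIA's implementation): given a detected object's
# distance and closing speed from the car's sensors, estimate the
# time-to-collision (TTC) and decide whether to warn the driver.

def time_to_collision(distance_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if speeds stay constant; inf if not closing."""
    if closing_speed_mps <= 0:
        return float("inf")
    return distance_m / closing_speed_mps

def should_warn(distance_m: float, closing_speed_mps: float,
                threshold_s: float = 2.0) -> bool:
    """Warn when estimated TTC drops below the (assumed) threshold."""
    return time_to_collision(distance_m, closing_speed_mps) < threshold_s

# An object 30 m ahead, closing at 20 m/s: TTC = 1.5 s, so warn.
print(should_warn(30.0, 20.0))   # True
# The same object 100 m ahead: TTC = 5 s, no warning yet.
print(should_warn(100.0, 20.0))  # False
```

In a real driver-assistance system, a check like this would run for every object the vision pipeline detects, on every camera frame, which is why the parallel throughput of a 192-core processor matters.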
Rex Moore: That's all software developed by NVIDIA as well, to go with the processor?
Shapiro: NVIDIA's model is to develop the best possible hardware on the market, and then we enable our partners with a lot of software libraries and tools. At the end of the day, though, the applications are written by others.
Just like in the video game world, NVIDIA does not write video games, but we produce the world's best hardware, and we provide a lot of technology to enable game developers to build better games.
In the car space, it's the same way. We develop our visual computing module, which is automotive grade, to go inside the vehicle. We have a lot of tools, software, and libraries that we make available, but at the end of the day the applications for in-vehicle usage -- the Google Earth interface here, for example -- are written in conjunction with Audi, but by a third party.