Tesla (TSLA -3.55%) is on the leading edge of autonomous driving technology, thanks in part to the valuable data provided by its nearly 2 million autopilot-enabled vehicles. In this Backstage Pass clip from "The AI/ML Show" recorded on Jan. 12, Motley Fool contributor Trevor Jennewine explains this and several other evolutions in self-driving technology currently underway at Tesla. 

Trevor Jennewine: In 2014, Tesla started equipping its vehicles with autopilot hardware. They called this hardware version 1.0. It had one external camera supported by RADAR, there were 12 ultrasonic sensors, and it featured a Mobileye processor. Mobileye has since been acquired by Intel. In 2016, the company upgraded to autopilot 2.0, and the big change was adding eight external cameras instead of one, still supported by RADAR and 12 ultrasonic sensors, and they swapped the processor from Mobileye to Nvidia. In 2017, they changed their RADAR supplier; they called this autopilot 2.5. In 2019, Tesla swapped out Nvidia's processors for its own inference chips. That's the theme here: Tesla is doing everything its own way. They are building all their own tech. I'll stick to that theme as we go through.

In 2021, Tesla transitioned its Model 3 and Model Y built in North America to vision-only, meaning it's no longer using RADAR data to make the inferences that power the car. It chose those cars specifically because they're higher volume, which allows data to be collected more quickly. The plan is to switch the other models over, even those built outside North America, eventually. That's the hardware. On the software side, every Tesla comes standard with autopilot software, and drivers can upgrade to full self-driving software for a one-time payment (they've just raised it to $12,000) or monthly payments of $199.

Then I have some pictures down here just to show you some of the features. With autopilot, you have traffic-aware cruise control and auto steer. With full self-driving, you have the ability to navigate: if you're in the middle lane on the highway, it'll take you over to a different lane and help you take the exit ramp. There's also auto lane change, auto park, and summon. Then there's the full self-driving beta, which is being rolled out to a limited number of people depending on a safety score that Tesla measures based on your driving. This has features like traffic light and stop sign control, where the car will recognize traffic lights and stop signs and slow to a stop, and auto steer on city streets. Navigating a city street is more difficult than navigating on a highway. All those features are added as they come out through over-the-air updates.

If you're familiar with the space at all, you may wonder why Tesla got rid of RADAR and why it's not using lidar. There are more complex explanations, but based on things Elon Musk has said, the logic goes something like this. There's only one system in the world right now that can operate a vehicle on a regular basis, and that's the human brain. When you drive, you don't use radio waves to measure distance or velocity. You don't use laser pulses to do the same thing. You don't use RADAR or lidar; you do everything with your eyes, which are essentially cameras. The only difference is that Tesla's cameras can see better than you. There are eight cameras with overlapping fields of vision, so they can see in 360 degrees. The only variable left is the decision-making capacity of your brain, which means that the problem can theoretically be solved with vision alone, because you already do it every day when you drive. It's just a matter of getting the AI right. There is another reason Tesla's focusing on vision instead of vision plus lidar plus RADAR, and I'm going to touch on that toward the end.

This slide right here shows the coverage zones provided by the eight external cameras. There are three on the front: one wide, one narrow, and one in between. There are two forward-looking side cameras and two rearward-looking side cameras, and then one directly on the back. The tiny little black dot in the middle is meant to represent the car, and you can see the coverage zones around it. Around the car, there's a small yellow halo. That's the ultrasonic sensors' range; it's much smaller, and it's more effective for detecting nearby cars, like if somebody was coming into your lane.

We've covered the evolution of Tesla's autopilot and full self-driving, and we looked at how the cameras, along with the ultrasonic sensors, are positioned. In order to build a self-driving car, you need hardware, software, and data. Those things together allow you to train the deep neural networks that let the vehicle perceive and move safely through its environment. The deep neural networks are the artificial intelligence engine. Just to define that term a little bit: a neural network is a series of algorithms specifically designed to mimic the way neurons in the human brain work. They are the backbone of deep learning.
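To make that definition a little more concrete, here is a minimal illustrative sketch (my own, not Tesla's code) of a tiny neural network in Python: each "neuron" is just a weighted sum of its inputs passed through a nonlinearity, and training nudges the weights until the network's outputs match the examples.

```python
# A minimal sketch (not Tesla's code): a tiny neural network trained on a toy problem.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, a mapping no single linear model can represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases for a small 2 -> 8 -> 1 network.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10_000):
    # Forward pass: each "neuron" is a weighted sum passed through a nonlinearity.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the error and nudge the weights to reduce it.
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ grad_out); b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * (X.T @ grad_h);   b1 -= lr * grad_h.sum(axis=0)

print(out.round(2))  # should end up close to [[0], [1], [1], [0]]
```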

I know we've talked about this before, but I have this slide from IBM. When you look at artificial intelligence, machine learning, and deep learning, deep learning is the narrowest term. The difference between deep learning and machine learning is that in machine learning, there is more human involvement. Humans are involved in extracting the necessary features, feeding them into the model, and saying, "Hey, you did this right, you did this wrong." It involves a lot of human interaction. Deep learning uses neural network algorithms that learn the important features in the data by themselves. The takeaway is that there's less human intervention with deep learning.
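As a hedged illustration of that distinction (the helper function and model below are hypothetical, not from IBM or Tesla): in classic machine learning, a human writes the feature-extraction step by hand, while a deep learning model consumes raw pixels and learns its own features.

```python
# Illustrative sketch only -- not IBM's diagram or Tesla's code.
import numpy as np
import torch

# Classic machine learning: a human decides which features matter and
# extracts them before any model sees the data.
def handcrafted_features(image: np.ndarray) -> np.ndarray:
    """Features a person chose by hand: average brightness and edge density."""
    brightness = image.mean()
    edges = np.abs(np.diff(image, axis=0)).mean() + np.abs(np.diff(image, axis=1)).mean()
    return np.array([brightness, edges])

print(handcrafted_features(np.random.rand(64, 64)))  # two hand-picked numbers per image

# Those numbers would then feed a simple model (say, logistic regression), and a
# human revises the feature list whenever the model gets things wrong.

# Deep learning: the network consumes raw pixels and learns its own features.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, kernel_size=3),  # early filters tend to learn edges and textures
    torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(16, 2),                 # later layers learn task-specific features
)

raw_pixels = torch.rand(1, 3, 64, 64)       # a fake 64x64 RGB frame
print(model(raw_pixels).shape)              # torch.Size([1, 2]) -- no hand-written features
```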

Back to Tesla's driving data. Tesla sources its data from nearly two million autopilot-enabled vehicles and uses it to train the neural networks to detect objects, segment images, and measure depth in real time. I mentioned that since October 2016, they've had eight cameras [inaudible 05:09:08]. Since that time, I went back and looked at all their press releases, and the figure I came up with was 1.9 million vehicles. They are coming up on 2 million vehicles with the 8-camera array, and they are able to source data from that fleet. Then the onboard supercomputer, the FSD chip, runs those deep neural networks. They analyze the computer vision inputs from the cameras in real time to understand the environment, make decisions, and move the car through it.
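As a very rough sketch of what "running deep neural networks on the camera inputs in real time" means, here is some illustrative Python; every function and class name is a hypothetical stand-in, not Tesla's FSD software.

```python
# Purely illustrative -- every name below is a hypothetical stand-in, not Tesla's software.
import random
from dataclasses import dataclass

@dataclass
class Perception:
    objects: list  # detected cars, pedestrians, traffic lights, stop signs
    lanes: list    # image regions labeled as lane lines or drivable space
    depth: list    # estimated distances, inferred from vision alone

def read_cameras():
    """Stand-in for grabbing the latest frame from each of the eight cameras."""
    return [f"frame_from_camera_{i}" for i in range(8)]

def run_neural_networks(frames):
    """Stand-in for the networks run on every frame: object detection,
    image segmentation, and depth estimation."""
    return Perception(
        objects=[{"type": "car", "distance_m": random.uniform(5.0, 50.0)}],
        lanes=["ego_lane", "adjacent_lane"],
        depth=[random.uniform(1.0, 100.0) for _ in frames],
    )

def plan_and_control(perception):
    """Stand-in for turning perception into steering, braking, and acceleration."""
    nearest = min(obj["distance_m"] for obj in perception.objects)
    return "brake" if nearest < 10.0 else "maintain_speed"

# The real loop runs continuously, many times per second; three iterations here
# just show its shape: sense, perceive, decide, act.
for _ in range(3):
    frames = read_cameras()
    perception = run_neural_networks(frames)
    print(plan_and_control(perception))
```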

Theoretically, Tesla's massive fleet means that it has more real-world data than its rivals, and that should translate into better deep learning models. To put that in perspective, Tesla's head of AI, Andrej Karpathy, noted that the company had over three billion miles' worth of real-world driving data back in February of 2020. Some analysts believe that figure crossed 5.5 billion in 2021, and that seems reasonable, but Tesla has not updated the figure. For perspective, in January 2020, Waymo reported 20 million miles' worth of real-world driving data, and in December 2020, Cruise reported two million miles. There's an orders-of-magnitude difference between Tesla's real-world miles and those of other companies. That being said, you can train these neural networks with simulated miles. Every company is doing that, so I don't think anybody really has an advantage there, but Tesla does have more real-world data.
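For a quick back-of-the-envelope sense of that gap, using only the figures quoted above:

```python
# Back-of-the-envelope comparison using the figures quoted above.
tesla_feb_2020 = 3_000_000_000   # ~3 billion real-world miles (Feb. 2020, per Karpathy)
tesla_2021_est = 5_500_000_000   # analyst estimate for 2021, not confirmed by Tesla
waymo_jan_2020 = 20_000_000      # ~20 million real-world miles
cruise_dec_2020 = 2_000_000      # ~2 million real-world miles

print(f"Tesla vs. Waymo:  {tesla_feb_2020 / waymo_jan_2020:,.0f}x")   # ~150x
print(f"Tesla vs. Cruise: {tesla_feb_2020 / cruise_dec_2020:,.0f}x")  # ~1,500x
```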