Image source: Tesla Motors.

When Tesla Motors (TSLA) rolled out its Autopilot feature last year, it validated, at least for bulls who believe in the Musk story, just how far ahead of its competitors Tesla truly is. To their credit, the bulls may have been on to something. After all, the Model S was a huge success, the next-generation Model 3 has hundreds of thousands of preorders, and Tesla beat all of its peers to market with a viable autopilot option.

That all changed on May 7, 2016, when Joshua Brown, the owner of a Tesla Model S, died when his Model S drove under the trailer of an 18-wheel truck on a highway in Williston, Florida, while the Autopilot mode was engaged. It was later revealed that Brown was watching a film when the fatal accident occurred. The backlash was swift, and Tesla itself came under fire following the fatality.

Complicating matters further, as the National Highway Traffic Safety Administration (NHTSA) begins its investigation into the Autopilot death, questions have arisen about whether Tesla should have disclosed the death to investors ahead of a large equity offering that took place the same month. CEO Elon Musk is adamant that his company disclosed the crash to investors in a timely manner (even though Tesla knew about the crash before the stock offering) and that Autopilot could save hundreds of thousands of lives if it were adopted more widely.

Regardless of the reader's feelings on SEC disclosure rules, the bottom line is that Tesla has made some pretty lofty claims about the safety of its cars. Extraordinary claims require extraordinary evidence, and that's where Tesla's Autopilot defense starts to look questionable.

What happens when you assume...

This is where things get a little math-heavy.

Autopilot may indeed be safer than human drivers, perhaps multiples safer, but Musk and Tesla Motors don't have anywhere near enough data to back up the claims they're making. Musk himself criticized Fortune reporter Carol Loomis for not "doing the math" to show how safe Autopilot really is. Here's the direct quote from the Fortune article that references an email from Elon Musk:

He continued, "Indeed, if anyone bothered to do the math (obviously, you did not) they would realize that of the over 1M auto deaths per year worldwide, approximately half a million people would have been saved if the Tesla autopilot was universally available. Please, take 5 mins and do the bloody math before you write an article that misleads the public."

If Musk is claiming Autopilot would save half a million lives per year, and therefore is twice as safe as human drivers, there's only one way to prove that "fact" to investors. Do the math. So, I did, with the help of Professor Christopher Nachtsheim of the University of Minnesota's Carlson School of Management, who also happens to be a Fellow of the American Statistical Association and is an expert in the design of experiments.  

How safe is it to give your Model S full control while driving? Image source: Tesla Motors.

How does Tesla know it's actually twice as safe?

In a response to Loomis' article, the Tesla Team gave its stance on the safety Autopilot has demonstrated. Note the use of the word "fact" in the statement below:

Given the fact that the "better-than-human" threshold had been crossed and robustly validated internally, news of a statistical inevitability did not materially change any statements previously made about the Autopilot system, its capabilities, or net impact on roadway safety.

For Elon Musk, or Tesla Motors, to say that it is a "fact" that Autopilot is safer than human drivers, the company has to have proven that claim to a statistical certainty. Musk says Autopilot isn't just safer, but two times safer than human drivers. It's this claim I'll analyze below, to show that Musk doesn't have the data to support it. Here's what Musk and Tesla would have to do to prove such a claim based on... math.

First, to do any statistical analysis of Autopilot, we have to make some assumptions and lay out the data. Here are the assumptions I used, based on the quotes from Elon Musk and Tesla Motors over the past few weeks:

  • Driving conditions encountered by human drivers or Autopilot are comparable in terms of safety and fatality risk. Of course, the safety risk of using Autopilot on the highway isn't equivalent to a human driving in a city center, but we'll assume each mile is equal given the limited data.
  • This analysis is only concerned with the death rate per 100 million vehicle miles, which according to NHTSA is 1.08. Musk claims Autopilot could save half a million lives worldwide, but for simplicity's sake I'll only compare against U.S. fatality data.
  • Crashes or injuries not resulting in death are not considered because we don't have this data from Tesla Motors.

We also need a couple of inputs that tell us how far Tesla needs to drive to prove that it's a "fact" that Autopilot is two times as safe as human drivers. The first is the underlying safety of Autopilot, meaning the true safety level we would observe if we had an infinite amount of test data.

If Autopilot is actually 10 times as safe as a human driver, Tesla would have to drive fewer miles to prove it is at least two times safer than it would if the underlying safety factor were really only three. Given that underlying safety factor, we can determine how many miles Tesla needs to drive to prove with 95% confidence (technically, in statistical terms, at the 5% level of statistical significance with 95% power) that Autopilot is at least two times as safe as human drivers.

So, here are the underlying safety levels of Autopilot (on the left) and how far Tesla Motors would have to drive on Autopilot to prove that it's really two times as safe as human drivers.

Tesla's Theoretical Safety Factor | Driven Miles Needed to Prove Minimum 2x Safety Compared to Human Drivers
2.2  | 231.3 billion miles
3    | 14.8 billion miles
5    | 3.6 billion miles
10   | 1.5 billion miles

Data source: Calculations by Professor Christopher Nachtsheim of the University of Minnesota's Carlson School of Management, Fellow of the American Statistical Association.
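
For readers who want to see roughly where numbers of this magnitude come from, here is a minimal sketch in Python. It is my own back-of-the-envelope calculation using a normal approximation to a one-sided Poisson rate test, not necessarily Professor Nachtsheim's exact method, and the function and variable names are purely illustrative. It lands close to, though not exactly on, the figures in the table above.

```python
# Back-of-the-envelope estimate of how many Autopilot miles are needed to show,
# at the 5% significance level with 95% power, that Autopilot's fatality rate
# is at most half the human rate. Assumptions: 1.08 deaths per 100 million
# miles for human drivers (NHTSA), fatalities follow a Poisson distribution,
# and a normal approximation to the one-sided Poisson rate test is adequate.
# This is an illustrative sketch, not Professor Nachtsheim's exact method.

from math import sqrt

HUMAN_RATE = 1.08 / 100_000_000  # deaths per mile, NHTSA figure
Z_ALPHA = 1.645                  # one-sided test at the 5% significance level
Z_BETA = 1.645                   # 95% power

def miles_needed(true_safety_factor, claimed_factor=2.0):
    """Autopilot miles required to demonstrate the claimed safety factor
    when the true (underlying) safety factor is higher."""
    rate_null = HUMAN_RATE / claimed_factor      # boundary rate: exactly 2x as safe
    rate_true = HUMAN_RATE / true_safety_factor  # assumed true Autopilot rate
    numerator = (Z_ALPHA * sqrt(rate_null) + Z_BETA * sqrt(rate_true)) ** 2
    return numerator / (rate_null - rate_true) ** 2

for factor in (2.2, 3, 5, 10):
    print(f"Underlying safety factor {factor}: ~{miles_needed(factor) / 1e9:.1f} billion miles")
```

Even this rough approximation reproduces the order of magnitude of each figure in the table; an exact Poisson calculation shifts the smaller numbers somewhat.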

You can see that the amount of data needed to make any claims about the safety of Autopilot is enormous. Since Tesla Motors only has 130 million miles of data on Autopilot, it can't come close to claiming it's a "fact that the 'better-than-human' threshold had been crossed and robustly validated." Even in the best case above, where Autopilot is truly 10 times safer than a human, Tesla needs at least another 1.37 billion miles of data before it can claim Autopilot is two times as safe as human drivers, and with one death already in its small sample, it has a long way to go.

Visual of a Model S making an autonomous lane change with Autopilot. Image source: Tesla Motors.

Tesla Motors hasn't proven anything yet

The incredible amount of data needed to prove safety isn't exclusive to Tesla Motors. Google's self-driving vehicle has driven over 1.5 million miles, but many millions -- arguably billions -- more are needed to validate the product's safety. Honda is in the same boat, testing its autonomous driving technology in an abandoned California town. And BMW, despite years of testing, isn't planning to introduce an autonomous vehicle until 2021 because the German automaker wants a lot more data to prove feasibility.  

The simple fact is, Tesla Motors hasn't proven anything to any sort of statistical confidence level. And with Autopilot still in a beta phase and consumers treating it like a finished, truly autonomous product (partly because it's called Autopilot), Tesla is making the product sound a lot more capable than it really is.

Life is more than a math problem

Autopilot may eventually improve vehicle safety by 2x, or more. But the math Tesla Motors is using to "prove" Autopilot is safe is incomplete, at best. And that's not even the biggest problem. The more memorable fact will be that Tesla and Elon Musk are treating the safety and lives of their customers as a beta test to build the necessary data. In a blog post titled "A Tragic Loss" that Tesla released shortly after news of the Autopilot death broke, the Tesla team spent four paragraphs explaining safety standards and statistics, and only in the fifth and final paragraph did it acknowledge the customer who died. A later post titled "Misfortune" called the death a "statistical inevitability."

I'm sure automakers like Honda and BMW, and Google's test program, are terrified of their technologies resulting in a death, even if it is indeed a statistical inevitability. And that's why they're being extra-cautious in launching autonomous driving technology. It's better to be late and safe than to put an incomplete beta product into consumers' hands. We'll misuse it, just like we stretch all the other rules of driving. Real people drive over the speed limit, don't stop at stop signs, and text while driving. Of course, there are going to be a few who turn on Autopilot and turn on a movie, failing to pay attention to the road. It is, to borrow a phrase, a statistical inevitability. 

Humans are imperfect, and they push technology and rules to their limits, autonomous driving included. The fact that Elon Musk and Tesla Motors seem to be brushing off that human reality in favor of incomplete statistical inferences is a risk investors simply can't ignore any longer. The lives of customers are more than a math problem.

Editor's note: While writing this article, the author requested data from Tesla regarding how it calculates its Autopilot safety statistics. Tesla replied but did not provide the data.