A fatal crash involving a Tesla Model S may lead the U.S. government to put limits on the company's Autopilot system. Image source: Tesla Motors.

Tesla Motors (TSLA 3.17%) has been stepping up to defend its Autopilot system after a fatal crash that has drawn an investigation by the National Highway Traffic Safety Administration (NHTSA). 

In a blog post last week, the company noted that the Autopilot system, which partially automates highway driving, has been used safely in "over 100 million miles of driving by tens of thousands of customers" around the world. 

Tesla argues that the system has already provided "a net safety benefit to society." That might be true. But there are also reasons to question how Tesla has rolled out this technology -- starting with the system's name.

The problem with Autopilot starts with the name

"Autopilot" is a great name for a self-driving system. It connotes high-tech cool while immediately communicating what the system is about. I bet some of Tesla's competitors wish that they'd come up with the name themselves. 

But here's the thing: Autopilot isn't really a true autopilot. At least, not yet.

Tesla describes the current version of Autopilot as a "beta" and has made it clear that it will add sensors, features, and capabilities to the system over time. But right now, it's not much different from the advanced "adaptive" cruise controls available on vehicles from the German luxury-car makers -- and even from mainstream brands like Ford (F 0.41%).

Teslas with Autopilot use cameras, ultrasonic sensors, and radar to detect objects and road markings -- and they can brake and steer when needed to keep you safe and in the flow of traffic. Tesla goes a bit beyond competitors in one notable way: Autopilot can change lanes for you. But the technology is not too different from what you'll find in a loaded Mercedes-Benz -- or even in a very well-equipped Ford Fusion.

Tesla's Autopilot uses a combination of sensors and cameras to monitor the car's environment. Image source: Tesla Motors.

Like those competitors' systems, Tesla's Autopilot is considered a level 2 system under the Society of Automotive Engineers' categorization of driving automation. Level 2 systems are considered "partial automation": The system can accelerate, brake, and steer, but it's still up to the human driver to monitor the driving environment and to take over when needed.

That's not true automated driving. To its credit, Tesla has been pretty clear -- at least, officially -- about the limits of Autopilot in its current incarnation. CEO Elon Musk himself has repeatedly warned that owners should keep their hands on the steering wheel and remain alert while using the system. 

But plenty of Tesla drivers, perhaps basing their expectations on the system's name rather than on the company's caveats, have gone well beyond those official limits. The victim of that fatal Model S crash was one of them. That's likely to lead the NHTSA to take action.

Will the feds crack down on Autopilot?

There are reasons to think they will. 

The idea of "beta testing" a safety system has never sat well with other voices in the industry. Now that there has been a fatal crash, it seems likely NHTSA will take action of some kind. While it's unlikely that the agency will force Tesla to withdraw or disable the system, I suspect Tesla will be pushed (or forced) to build in more explicit limitations on its use. 

Musk is absolutely right when he says that safety systems like Autopilot are beneficial. But that's true only if they're used with the proper expectations. As with many things involving Tesla Motors, I think expectations around Autopilot have gotten out of hand -- and unless Tesla imposes some limits on Autopilot proactively (and soon), it may fall to the feds to rein it in.