What does it mean when someone refers to "Level 4" or "Level 5" self-driving? What are the levels, and who decides which systems qualify?

The "levels" are taken from a framework established by SAE International, the professional association of automotive engineers, as a shorthand to describe how far a particular system goes in automating the task of driving a vehicle. The SAE's framework has been widely adopted by regulators, engineers, and automakers -- and by investors, too.

The SAE's framework describes six levels of vehicle automation, ranging from no automation at all -- an old-school car -- to fully automated, meaning a system that can drive at least as well as a skilled human in any situation. 

It's very important for investors looking at companies in the self-driving space to understand what is meant by the various levels, who decides what level to assign to a given system, and the differences between the levels -- particularly as they get more advanced. (I'll tell you a secret up front: Sometimes, systems described as "self-driving" really aren't.)

It hasn't always been easy to make sense of this framework. As you'd expect from a society of automotive engineers, the SAE's own descriptions of the levels are jargony and a little hard for nonexperts to parse. But for the most part, they make sense when explained in plain English -- and that's what you'll find below.


A Waymo self-driving minivan. Image source: Waymo LLC.

What is autonomous driving?

First and foremost, an "autonomous driving" system is one that allows a vehicle to drive safely on public roads without human involvement. Terms like "automated driving," "driverless vehicle," and "self-driving car" all describe the same technology: a computer "brain" and sensors that can drive a vehicle in place of a human, at least under some circumstances.

Here are a few key things to keep in mind as we dig into the world of automated driving systems:

  • Advanced driver-assist and driverless systems rely on several types of sensors, including cameras, radar, and lidar (a sensor that uses invisible laser beams to precisely measure distances) to help the vehicle's computer "brain" understand where the vehicle is relative to its surroundings. 
  • The number of sensors used, the amount of computing power required for the "brain," and the overall cost of the system all typically increase as the level of automation increases. 
  • Self-driving systems are a form of artificial intelligence. They generally incorporate machine learning, meaning algorithms that can adjust themselves and improve their effectiveness as more data is acquired.
  • Fully driverless systems acquire vast amounts of data as they travel from place to place and encounter new traffic situations. The individual vehicles' systems typically share that data with a remote data center, which updates all of the vehicles using the system with lessons learned. This process is called fleet learning.
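The fleet-learning loop described in that last bullet can be sketched in a few lines of Python. This is purely illustrative: the class names, methods, and the idea of a single central server are hypothetical simplifications, not how any particular company's system actually works.

```python
# Minimal sketch of "fleet learning": each vehicle logs what it
# encounters, shares it with a central data center, and the data
# center pushes lessons learned back out to the whole fleet.
# All names here are hypothetical.

class FleetServer:
    def __init__(self):
        self.lessons = []        # pooled observations from every vehicle
        self.model_version = 0   # bumped each time the fleet is updated

    def upload(self, vehicle_id, observations):
        """A vehicle shares the situations it encountered on the road."""
        self.lessons.extend(observations)

    def retrain_and_push(self, fleet):
        """Fold pooled data into a new 'model' and update every vehicle."""
        self.model_version += 1
        for vehicle in fleet:
            vehicle.model_version = self.model_version


class Vehicle:
    def __init__(self, vehicle_id):
        self.vehicle_id = vehicle_id
        self.model_version = 0
        self.log = []

    def drive(self, situation):
        self.log.append(situation)


# One car encounters a new situation; after the next update,
# every car in the fleet "knows" about it.
server = FleetServer()
fleet = [Vehicle("car-1"), Vehicle("car-2")]
fleet[0].drive("double-parked truck blocking the lane")
server.upload("car-1", fleet[0].log)
server.retrain_and_push(fleet)
```

The point of the sketch: the learning happens centrally, so a situation seen by one vehicle benefits vehicles that have never encountered it.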

There's one more thing to know, and it's very important to keep in mind as you assess claims about particular systems:

  • The level claimed for any given system is assigned by its manufacturer. As of right now, there is no independent or government agency certifying a given system as Level 3 or Level 4 or whatever. Until that changes, the levels are best thought of as a manufacturer's claim: They might be a bit optimistic, so drive carefully. 

If you'd like to dig deeper, you'll find a comprehensive overview of driverless-vehicle systems here, with links to even more in-depth information. But those five points are enough for us to get started. 

  • Level 0 -- No automation. Anti-lock brakes and cruise control are still Level 0.
  • Level 1 -- A little automation: control of steering or brakes, but not both, under limited circumstances. Adaptive cruise control is a Level 1 system.
  • Level 2 -- More automation, but not self-driving: the system can control steering and brakes under limited circumstances, like on a highway. The human driver still needs to be alert and ready to take over.
  • Level 3 -- Similar to Level 2, but with a little more time for the human to take over. Controversial: How much distraction is OK?
  • Level 4 -- True self-driving, but under limited circumstances. May be "geofenced" -- limited to a carefully mapped area. Most "self-driving" efforts are aiming at Level 4.
  • Level 5 -- Full self-driving with no limitations. Likely many years away.

Data source: SAE International.

Level 0: No automation

The SAE's framework begins with Level 0: No automation. The human driver is responsible for 100% of what the SAE refers to as "the dynamic driving task," meaning the work of actually driving the vehicle on an ongoing basis. (Keep that word "ongoing" in your mind as you read on -- it's a key part of how the SAE defines automation.)

Level 0 isn't that hard to understand, but even here there are some nuances. Probably the most important point is this: There are plenty of modern cars with driver-assistance features that still qualify as Level 0.

For instance, antilock brakes don't count as automation because the human still has to step on the brake pedal. Even systems that automate a momentary task -- like the automatic emergency braking systems found on some new cars -- don't count as automation for our purposes, because they don't automate the "dynamic driving task" on -- you guessed it -- an ongoing basis.

The takeaway: While your grandfather's old Buick is certainly a Level 0 vehicle, so are many modern cars that are equipped with what we think of as "driver-assist" features. 

Level 1: Some assistance for the human driver

The SAE defines a Level 1 system as one that provides either steering control or acceleration-and-braking control on an ongoing basis -- but only under limited, specific circumstances. 

What does that mean? Here's an example using technology that's probably familiar to you.

Aside from very basic entry-level models, most cars made in recent years have a cruise-control system. You've almost certainly used one, but if you haven't, the principle is simple: Accelerate to a desired speed, activate the cruise control, and the system will hold the vehicle at that speed, even up and down hills, after you take your foot off of the accelerator pedal. 

That doesn't count as "automation" in the SAE's framework, because the dynamic part of the driving task isn't automated: The human still has to be ready to hit the brakes (and deactivate the system) if there's slower traffic ahead. 

In recent years, automakers have begun offering more advanced cruise-control systems, so-called adaptive cruise control. Adaptive cruise control systems are smarter: They use radar to keep your car at a safe distance behind the vehicle ahead. If the car ahead slows, the system automatically slows your car as well in an effort to maintain a safe distance. 

With adaptive cruise control, one part of the dynamic driving task -- controlling acceleration and brakes -- is automated. Of course, it's only automated under specific circumstances, namely when you turn the system on while driving on the highway. But that's sufficient to lift an adaptive cruise-control system into Level 1. 
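The adaptive cruise control logic can be sketched as a simple controller: hold the driver's set speed when the road is clear, but slow toward the lead car's speed when the radar sees a vehicle closing inside a safe following distance. The function name, gains, and thresholds below are made up for illustration; real systems are far more sophisticated.

```python
def acc_target_speed(set_speed, gap_m, lead_speed, safe_gap_m=40.0):
    """Toy adaptive-cruise sketch: return the speed (m/s) to hold,
    given the radar-measured gap to the car ahead.

    set_speed  -- the driver's chosen cruise speed (m/s)
    gap_m      -- distance to the vehicle ahead (None if road is clear)
    lead_speed -- the leading vehicle's speed (m/s)
    safe_gap_m -- desired following distance; illustrative value
    """
    if gap_m is None or gap_m > safe_gap_m:
        return set_speed  # road clear: behaves like plain cruise control
    # Too close: match the lead car, shaving off extra speed the
    # closer we are, so the gap opens back toward safe_gap_m.
    closing_penalty = (safe_gap_m - gap_m) * 0.1
    return max(0.0, min(set_speed, lead_speed - closing_penalty))
```

Notice that this automates only acceleration and braking, and only while engaged on the highway -- exactly the "limited circumstances" that make adaptive cruise control a Level 1 system.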

Level 2: Limited help with steering and braking 

Level 2 is "partial automation." It's for driver-assistance systems that provide both steering and acceleration-and-braking control, but again, only under limited circumstances. If the human driver has to intervene regularly -- for instance, when the car is exiting a highway -- then it's probably a Level 2 system. Importantly, it's not "self-driving," even if it kinda-sorta seems like the vehicle is driving itself.

That can make understanding Level 2 a little tricky. Let's take a closer look. 

Level 2 can sometimes seem like self-driving, but it isn't

Tesla's (TSLA) Autopilot and General Motors' (GM) Super Cruise, both of which can accelerate, brake, and steer under many (but not all) circumstances, can certainly deliver an experience that seems like "self-driving." But there's a reason that GM and (usually) Tesla are very careful not to describe them as full self-driving systems in their current forms: With both, the human driver needs to be alert and ready to take over the driving task on very short notice -- more or less instantly.  

That's a critical distinction. If a human driver is needed on short notice, then a system shouldn't be described as "self-driving." Instead, it's properly referred to as an advanced driver-assist system (ADAS). 

That may sound like splitting hairs, but it's a big deal because the expectations around a given system can affect how safe it is in practice. A Tesla Model S equipped with an early version of Autopilot was involved in a fatal accident in May 2016 when the car's system failed to recognize a tractor-trailer crossing the road -- and the human driver apparently didn't intervene to hit the brakes. 

That was the first known fatal accident involving anything close to a self-driving system, and it emphasized that these systems needed to do a better job of ensuring that human drivers were alert and ready to take control on short notice. 

That in turn led to considerable discussion within the industry: How could a system ensure that a human driver was alert without excessively annoying the human?

The challenge: How to keep the driver alert, but not annoyed?

Annoyance may seem like a trivial consideration when we're talking about avoiding fatal accidents. But consider this: If the system is too annoying to use, it won't get used. 

The current version of Tesla's Autopilot warns drivers not to take their hands off of the steering wheel while the system is in use. To enforce that, the system has sensors that aim to detect whether the driver's hands are on the wheel. 

If the driver's hands aren't on the wheel, the system gives a visual reminder after 30 seconds, followed by an audible warning after 45 seconds. After a minute with no intervention by the driver, the Autopilot system shuts off and can't be turned back on until the car is restarted. 
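That escalation is essentially a timer-driven sequence, and it can be modeled in a few lines. The 30-, 45-, and 60-second thresholds come straight from the description above; the function itself is a hypothetical sketch, not Tesla's actual implementation.

```python
def autopilot_warning(seconds_hands_off):
    """Sketch of the escalating hands-off-wheel warnings described above:
    visual reminder at 30 s, audible warning at 45 s, shutoff at 60 s."""
    if seconds_hands_off >= 60:
        return "shutoff"          # disabled until the car is restarted
    if seconds_hands_off >= 45:
        return "audible warning"
    if seconds_hands_off >= 30:
        return "visual reminder"
    return "no warning"
```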

It's not a bad system, but reviews have suggested that it's easy to fool -- drivers can just nudge the wheel when the lights come on to reset the warning cycle, while otherwise driving hands-free.

GM's Super Cruise incorporates an elegant solution

GM's solution allows for hands-free driving. Instead of detecting steering-wheel motions, it uses a camera to track the driver's head position. If the system detects that the driver's eyes aren't on the road, it begins a series of prompts to try to get the driver's attention back on the road.

Similar to Tesla's alerts, Super Cruise's prompts progress from a flashing light in the steering wheel's rim, to a beeping sound and seat vibrations, to a voice command -- at which point the Super Cruise system disengages. But GM goes further than Tesla: If the driver still doesn't take control at that point, the system will gradually bring the vehicle to a complete stop, activate the hazard warning flashers, and call for help (using GM's OnStar system).

GM also built a slew of additional safeguards into Super Cruise to try to ensure that it's only used in circumstances it can safely handle. For starters, it won't even switch on if the vehicle isn't on a highway, if the road's lane markings aren't clearly visible, or if the system thinks the driver isn't fully attentive. That may sound like a recipe for constant annoyance, but it's all elegantly implemented. Most reviewers have agreed that Super Cruise is a pleasure to use.
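Both pieces of that design -- the gatekeeping before engagement and the escalating prompts -- can be sketched as follows. The preconditions and the four-step sequence mirror the paragraphs above; the names and structure are hypothetical, not GM's actual code.

```python
# Sketch of Super Cruise-style safeguards, per the description above.
# All names here are hypothetical.

ESCALATION = [
    "flashing light in the steering wheel rim",
    "beeping sound and seat vibrations",
    "voice command (system disengages)",
    "stop the vehicle, hazard flashers on, call for help",
]

def can_engage(on_highway, lane_markings_visible, driver_attentive):
    """The system refuses to switch on unless every precondition holds."""
    return on_highway and lane_markings_visible and driver_attentive

def next_prompt(ignored_prompts):
    """Each ignored prompt escalates to the next step in the sequence."""
    step = min(ignored_prompts, len(ESCALATION) - 1)
    return ESCALATION[step]
```

The design choice worth noticing: refusing to engage outside safe conditions moves the safety burden from mid-drive warnings to the moment of activation.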

The thing to remember: "Hands-free" isn't necessarily self-driving

To sum up, a Level 2 system is an advanced driver-assist system that can allow for hands-free driving under limited circumstances. But the human driver has to remain alert and ready to take over the "dynamic driving task" on short notice, and the leading systems attempt to ensure that the driver remains alert while the systems are in use. 

Level 3: A little closer to self-driving 

The SAE defines Level 3 as "conditional automation." The difference between Level 2 and Level 3 is a matter of degree. In practice, it depends on the answer to this question: How alert does the human in the vehicle's driver's seat have to be? 

With a Level 2 system, the driver needs to be very alert, ready to take over the driving task right away if the system encounters something it can't handle. With Level 3, the expectation is that the system can handle the driving as long as it's within its "operational design domain," meaning that the human's role is to be a "fallback."

Here's how the SAE puts it: 

The sustained and [operational design domain]-specific performance by an [automated driving system] of the entire [dynamic driving task] with the expectation that the [dynamic driving task] fallback-ready user is receptive to [automated driving system]-issued requests to intervene, as well as to [dynamic driving task] performance-relevant system failures in other vehicle systems, and will respond appropriately. 

Got that? If you're thinking that it's only a vague difference from Level 2, you're not alone.

Why many automakers are steering clear of Level 3

The difficulty in defining (and explaining) a difference between Level 2 and Level 3 is the problem with Level 3 in practice. It's easy for us to understand that a Level 2 system isn't true self-driving, while (as we'll see below) Level 4 is self-driving, and it's fairly easy to explain that difference to users. But Level 3 seems to exist in between. It's self-driving, except when it isn't.

That presents a challenge for engineers charged with developing a system. In a presentation in 2016, Ford Motor Company's (F) former global product chief, Raj Nair, explained why Ford's road map for self-driving technology skips Level 3 entirely: 

We found we couldn't safely get through scenarios that really concerned us without adding technology like LiDAR and like high-definition 3D maps. Once you go to that point, you're really at the solution for Level 4. So we changed our direction from walking up driver assist technologies, the camera-based and radar-based technologies increasing at percentage, et cetera, to all the way leapfrogging into what does it take to get to Level 4, what does it take to get rid of the driver, what does it take to get rid of the steering wheel, the pedals, and then working on that technology problem.

Put simply: Once Ford engineers had added all of the technology needed to make their prototype Level 3 system meet their safety standards, they were essentially at Level 4. Given that, they reasoned, why plan to offer Level 3 at all? 

But a few Level 3 systems are headed to market

A lot of other automakers have reached the same conclusion about Level 3, but not all. There's another point of view, namely that Level 3 could be interpreted as a nicer implementation of the concepts behind Level 2 systems like Super Cruise.

That's Audi's view. Audi will launch a Level 3 system it calls "Traffic Jam Pilot" on its 2019 A8 sedan. Audi's pitch for the system effectively captures what it sees as the distinction between Level 2 and Level 3:

With Traffic Jam Pilot engaged, drivers no longer need to continuously monitor the vehicle and the road. They must merely stay alert and capable of taking over the task of driving when the system prompts them to do so.

But note: While Audi will offer this system in Germany, which recently passed laws making this type of system explicitly legal, Audi won't bring it to the United States -- at least, not yet -- because of concerns about liability and regulatory exposure.

It's not hard to see why. Even in Audi's own words, the distinction is still vague. The idea seems to be that Audi's system will give the human a bit more time to take over than a Level 2 system would. But still: Does the human in the driver's seat need to be paying attention, or not?

Fortunately, the next level doesn't require such hair-splitting to understand.

Level 4: True self-driving, but with limits 

The SAE says that Level 4 is "high driving automation": The system doesn't need a human backup at all, as long as it's operating within its "operational design domain." Put another way, a Level 4 system still has limits, but as long as it's within those limits, no human intervention will be needed -- it's real self-driving. 

Here's what that means in practice.

Most of the "self-driving" systems under development today depend on highly detailed 3D maps to help the vehicle's "brain" know exactly where it's located, down to a few centimeters (or less). These systems generally use several lidar units to "map" the vehicle's surroundings from moment to moment. The lidar images are then compared to a stored 3D map. 

Some of the systems under development use the lidar-and-maps method as a primary method of locating the vehicle, while others use it as a backup. (One of the principles of full self-driving systems, like autopilot systems in aviation, is that all critical subsystems should have backups in case something fails.) 
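The lidar-and-maps idea boils down to a matching problem: compare the distances the lidar measures right now against the distances the stored map predicts for each candidate position, and pick the position that fits best. The toy function and example values below are purely illustrative of that principle -- real localization is vastly more complex.

```python
def best_position(live_scan, map_predictions):
    """Toy localization sketch: each candidate position in the stored 3D
    map predicts what the lidar *should* see from there; pick the
    candidate whose prediction is closest to the live scan."""
    def error(predicted):
        return sum(abs(a - b) for a, b in zip(live_scan, predicted))
    return min(map_predictions, key=lambda pos: error(map_predictions[pos]))

# Live lidar ranges (meters) to a few landmarks, versus what the map
# predicts from two hypothetical candidate positions:
scan = [4.9, 12.1, 7.0]
candidates = {
    "intersection of A St and 1st Ave": [5.0, 12.0, 7.0],
    "mid-block on A St":                [9.0, 8.0, 3.0],
}
```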

A Level 4 system is true self-driving as long as the system is operating within its limits. It doesn't matter whether the human sitting in the driver's seat is distracted, asleep, or not even present -- a Level 4 system will get the vehicle safely to its destination, as long as it's operating within its intended limits. 


General Motors' first Level 4 vehicle, set to enter production in 2019, doesn't have a steering wheel or pedals. They're not needed as long as the vehicle operates within specified limits. Image source: General Motors.

True Level 4 vehicles are beginning to arrive

Level 4 vehicles are coming soon -- in fact, they're already here. Waymo, the Alphabet (GOOGL) (GOOG) subsidiary formerly known as the Google Self-Driving Car Project, began deploying Level 4 vehicles in a pilot ride-hailing service in Chandler, Arizona, earlier this year. General Motors (GM) has said that its own self-driving subsidiary, GM Cruise, expects to deploy a fleet of "thousands" of Level 4 taxis in dense urban environments in the U.S. starting in 2019. Others will follow over the next few years. 

But note that Waymo and GM aren't choosing locations at random. Waymo spent months "training" its prototype vehicles on the roads and traffic conditions in and around the Chandler area; GM Cruise has been doing the same with its Chevrolet Bolt-based vehicles in San Francisco. For now, those vehicles are limited to the areas with which their systems are most familiar, all of which have been very carefully mapped.

Those limits are what make the systems Level 4. For instance, no matter how the maps are used, if the vehicle is dependent on a map, that means it's limited, because it can't go anywhere that isn't yet mapped. Of course, a Level 4 system could have other limits; it may not switch on if it detects heavy snow, for instance.

Long story short: If a system offers fully automated driving within limits, the limits are what make it Level 4. 
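Those limits -- the "operational design domain" -- can be thought of as a checklist the vehicle runs before driving itself. The geofence and weather values below are hypothetical illustrations, not any real system's actual boundaries.

```python
# Hypothetical sketch of an operational-design-domain check for a
# Level 4 system: inside its geofenced, mapped area and the conditions
# it can handle, the system drives itself; outside, it refuses.

MAPPED_AREAS = {"Chandler", "Tempe"}       # illustrative geofence
HANDLED_WEATHER = {"clear", "light rain"}  # illustrative limits

def within_odd(area, weather):
    """True only when every operating limit is satisfied."""
    return area in MAPPED_AREAS and weather in HANDLED_WEATHER
```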

Level 5: Full unconditional automated driving

Level 5 is the dream: Unconditional (that is, no-limits) automated driving, with no expectation that a human driver will ever have to intervene. 

Put another way, a Level 5 system should be able to go anywhere a skilled human driver could go, under any conditions a skilled human driver could handle, entirely on its own.

Needless to say, there aren't any Level 5 systems available now. A few automakers, including Tesla and BMW, have claimed that they will have Level 5 systems within a few years -- but many experts believe that true Level 5 autonomy will take years to develop, if it ever happens at all.

It's possible that Level 5 systems will be the eventual result after lots of connected Level 4 vehicles are deployed and learning. Remember what I said about machine learning and fleet learning above? Systems that are deployed in thousands of vehicles operating every day will amass huge amounts of data and encounter lots of new situations -- both of which mean they'll advance quickly on their own. Between this ongoing fleet learning and the expansion of well-mapped areas, the most widely deployed systems will effectively become Level 5 -- eventually.

Maybe.

It's also possible that a true Level 5 system -- a single system that can safely drive in a Montana snowstorm, a Shanghai traffic jam, and anywhere else a skilled human driver might be able to go -- won't come into being for many, many years (notwithstanding the bold promises made by certain CEOs). 

Why full self-driving might still be years away

Full self-driving is an incredibly difficult problem to solve. Consider that even Waymo -- which began way back in 2009 as the Google Self-Driving Car Project and has some of the best and most experienced engineers in the field -- is still having trouble getting its vehicles to safely navigate some routine traffic situations in suburban Chandler, where the situations are all known and the area is carefully mapped. How do you think they'd fare in a completely unfamiliar environment -- downtown Mumbai, for instance? 

A capable human driver from suburban Arizona would probably be able to navigate Mumbai pretty well with a smartphone and a bit of familiarization. But at least right now in 2018, it looks like it could be many years before a self-driving system will be able to do the same -- and that means that true "full self-driving" won't arrive anytime soon.