Waymo is drawing a line in the sand for driverless terminology, announcing today that it will no longer refer to its vehicles as “self-driving” and arguing that the industry needs more consistency in how it refers to cars that can operate themselves. Though the language change may seem slight, the Alphabet-owned company concedes, it’s actually key to making sure human drivers know just what is and isn’t expected of them in future vehicles.
As Waymo puts it, “precision in language matters and could save lives.” In a blog post published today, the company says that it will only be referencing its technology as “autonomous” rather than “self-driving.”
“We’re hopeful that consistency will help differentiate the fully autonomous technology Waymo is developing from driver-assist technologies (sometimes erroneously referred to as “self-driving” technologies) that require oversight from licensed human drivers for safe operation,” Waymo says. Currently, though there are vehicles on sale in the US with Level 2 advanced driver-assistance systems (ADAS) that can help with lane-centering, maintain pace with traffic, and even allow the human driver to take their hands off the steering wheel, there’s no commercially available Level 4 or Level 5 vehicle that can actually operate itself with no human oversight.
“Unfortunately, we see that some automakers use the term “self-driving” in an inaccurate way, giving consumers and the general public a false impression of the capabilities of driver assist (not fully autonomous) technology,” Waymo adds. “That false impression can lead someone to unknowingly take risks (like taking their hands off the steering wheel) that could jeopardize not only their own safety but the safety of people around them.”
Though Waymo doesn’t mention any other autonomous project by name, it’s hard not to see the announcement as a criticism of automakers like Tesla. Elon Musk has long promised his EVs will one day be able to fully drive themselves, and indeed sells a “Full Self-Driving” package on cars like the Model 3 and Model Y. Despite very limited beta tests and demos, however, no Tesla actually has the full functionality of that package enabled, and even those in the beta are warned by the car that they’re still responsible for how it drives.
Setting operator expectations of how capable their vehicles are – or when they might not be able to offer the usual suite of driver-assistance functionality – has become a controversial topic in recent years. With ADAS available on more and more cars, trucks, and SUVs, getting the so-called handover process right is something automakers and regulators are focusing on.
In a Level 4 or Level 5 autonomous vehicle (AV), the idea is that the human occupant wouldn’t be called upon to operate it at all. If the AV couldn’t deal with a certain environmental challenge – such as particularly bad weather – then it would be designed to cease operations, rather than hand control over to someone riding in it. That’s a distinct difference from Level 2 and Level 3 systems.
In those, the responsibility for vehicle operations is shared between car and driver. A Level 2 system like GM’s Super Cruise, for instance, can offer hands-free lane-keeping and maintain pace with traffic on mapped divided highways. However, the human driver is still expected to be paying attention to the road, as they might be called upon to retake control at any time. Tesla’s Autopilot – though currently able not only to do lane-keeping and maintain pace with traffic, but also to automatically change lanes if the driver has enabled that – still requires a hand on the wheel and could pass back responsibility to that driver at any point.
Waymo currently offers a ride-hailing service, Waymo One, in certain parts of Arizona, where its autonomous vehicles can be summoned via an app. Still, it’s a long way from offering a commercial service more broadly than that, in part because of the disparity of regulations around AVs in different US states.