It was, frankly, the most boring car ride of my life. The Chrysler Pacifica minivan drove me around several city blocks in Mountain View, California, all on its own, with mechanical precision and granny-like caution. As two Waymo engineers monitored the action—such as it was—via laptops in the front seats and a communications rep called play-by-play, with me in the middle row, the sensor-laden robocar navigated intersections, yielded deferentially to pedestrians and turned into a goody-goody Ned Flanders when challenged by the human bullies that cut it off or subtly demanded right of way.
The only excitement came when the car caught a whiff of some unseen, ghostly threat and panic-braked twice in the space of three seconds, before moving on as though nothing had happened. Having enjoyed many such moments in my own driving life—spooked by some glimmer of movement in my peripheral vision, briefly spazzing out at the controls—I chuckled at the familiarity and immediately bonded with my digital chauffeur. Been there, bro . . . .
By and large, however, the driving was flawless, a sedate but completely competent execution of the average suburban errand, replete with quirky, unpredictable adversaries—motorised and otherwise—and all the roadway signs, signals, markings and instructions that humans process nearly subconsciously. Momentary freak-out aside, it provided a ride that was thoroughly uneventful, as it should be. The drive system created by Waymo, which is owned by Google parent Alphabet, now has millions of miles of training under its artificially intelligent belt, and it shows. I concluded, perhaps prematurely, that I’d gladly let one drive me to work any day of the week.
If you’ve been paying attention, you’ve likely heard that this is our future. One day we’ll all be driven around our world entirely by computerised cars. They’ll be more efficient, safer, more capable and completely unflappable in the face of conflict, whether inter-auto or interpersonal. (Indeed, road rage will have to be programmed in—or at least learned via some glitch in the matrix.) Your car will drop you off at work, then go shuffle your kids to school or collect Grandma from the nursing home to take her to the mall. It may even earn you some extra scratch as a taxi when nobody’s using it. Of course, this assumes you even own the thing in the first place—something many speculators, along with, unsurprisingly, autonomy advocate Uber, consider unlikely. Our future will be managed by a vast fleet of thoroughly anonymous, completely autonomous cars, and we will simply summon a new one out of the ether every time we need to go somewhere.
Or maybe not. The truth is, this fantastic future vision may not arrive quite as fully formed as some have been claiming for a decade. In just the last year, carmakers and autonomy start-ups that were once bullish on the prospects of fully self-driving cars have been quietly backpedalling on their claims about what’s possible, and on what timeline. This waffling has to do, partially, with public perceptions, which vacillate wildly between excitement over every new demonstration, confusion about terminology and capability, and concern about safety, hackability and privacy.
In the big picture, this is a fairly predictable correction, given the challenges and the fact that this is a completely new thing. “The manufacturers have definitely been doing some right-sizing of expectations,” says Jeremy Carlson, an autonomy analyst with London-based auto-industry research firm IHS Markit. “How consumers will adopt and interact with this technology is unknown, and the lack of progress in the regulations that would actually allow the technology on public roads is noteworthy. That’s not to say that things are necessarily slowing down in terms of development, but the expectations we’re seeing for full autonomy certainly are.”
The fact that one of Uber’s self-driving test vehicles struck and killed a pedestrian in Tempe, Arizona, in early 2018 is also part of the optics problem and leads directly to the other reason behind manufacturers recalibrating their position on autonomy: technological challenges, specifically a car’s ability to read and interpret the environment around it. Though the pace of autonomy development is ramping up, with new announcements about partnerships, deals and tests arriving almost daily, driving in the real world is an incredibly nuanced process that requires far more than reading street signs and being able to accelerate, steer and avoid hitting people. Although we’ll certainly see massive progress and many limited services and features over the next decade, some don’t think computers will ever be able to manage driving completely on their own or as well as humans can.
This is because, as it turns out, we’re actually quite good at driving, all things considered. Though plenty of commuters and road warriors, along with a collection of dashcam videos on YouTube that is both wildly entertaining and horrifying, might suggest otherwise, there’s also plenty of evidence that we’re not playing Death Race 2020 out there every day. In my own 30 years of driving, I’ve seen precisely two accidents happen before my eyes, a few dozen gaffes and faux pas and a handful of incidents that might be described as egregiously boneheaded driving. But such things are not a daily occurrence. Yes, distraction is a major problem, and yes, impaired driving remains a huge deal, but in terms of the simple physical act of driving—well, when I take a few seconds on the road to pay attention to the myriad inputs, processes and second-by-second nuances of the discipline, it becomes clear that there’s an awful lot going on in my brain and body that makes it happen safely and smoothly. I couldn’t begin to write out a script for getting a total noob through a stop sign, let alone a computer, but I do it dozens of times a day without thinking about it.
Therein lies the great wall before us—or rather, before our computers. “There are basically two camps,” says robotics engineer and former Navy fighter pilot Missy Cummings, director of the Humans and Autonomy Lab at Duke University in North Carolina. “First are those who understand that full autonomy is not really achievable on any large scale, but are pretending they are still in the game to keep investors happy. Second are those who are in denial and really believe it is going to happen. When you also consider that not everyone is a techie and loves the bells and whistles of advanced systems, I think the automotive industry is in for some tough times ahead.”
What’s blocking this transition? It’s the core of what gets me through a great many of those stop signs along with thousands of other daily, often unnoticed, challenges: intuition and judgment. In other words, two things computers don’t actually have.
At the moment, in terms of computer-assisted driving, we live entirely in a world of advanced driver assistance systems, or ADAS. Think blind-spot warnings, automatic emergency braking, lane centering and adaptive cruise control, which uses sensors to keep track of vehicles around you and automatically maintain your distance from them. Such features make up Levels 0, 1 and 2 of the six-level autonomy scale outlined by SAE International (formerly the Society of Automotive Engineers). They assist drivers but still require uninterrupted driver attention. Levels 3 and 4 would be hands-off, eyes-off systems that can drive in most conditions but, on the lower end, require the driver to take over should the car not be able to manage a situation. A Level 5 system is the holy grail—a car that can operate in all conditions without human involvement or even the need for humans to serve as backups. Or, for that matter, even be in the vehicle at all.
Public confusion about Level 3-this and Level 5-that is a big part of the present hand-wringing, and the truth is the SAE’s important (but relatively technical) designations should probably have remained insider intel, with consumers needing to know only the difference between ADAS and full self-driving capabilities, along with a few limitations clarified for each system. Because, at the moment and for the foreseeable future, there are zero publicly accessible cars that function above Level 2. These include Teslas with the company’s vaunted Autopilot. That system allows a Tesla to drive down the motorway on its own and pass other cars, though it doesn’t permit hands-free driving for the durations that Cadillac’s Super Cruise does. Autopilot is considered by some to be borderline Level 3, but it, too, requires persistent driver attention—despite online videos showing people spoofing the car’s steering-wheel sensor, which determines whether you’re engaged, and then reading, sleeping or even climbing into the back seat while their Teslas drive them down the road. Twitter fanboys be damned, it’s actually a Level 2 system.
The technology does exist to enable Level 3 driving—Audi’s A8 has the necessary hardware—but it’s hamstrung by a market that isn’t quite ready for it yet. Of particular concern is that humans have to be ready to take over, yet carmakers aren’t especially confident in drivers’ ability to do so at a millisecond’s notice. Nor are regulators, lawyers and insurance companies quite sure yet how to deal with cars driving themselves on the roads, from a safety and liability perspective.
Short term, it will take several years for those initial concerns to be addressed to comfortably enable Level 3 driving, but some innovators are looking to leapfrog straight over that anyway, jumping to Level 4 autonomy, where no driver involvement is necessary in almost all cases. Carmakers and autonomy start-ups are hard at work, often in partnership. Waymo is developing a system that can be deployed in any vehicle, though it’s working with Jaguar and Chrysler to field its initial products. Meanwhile, Argo AI is partnered with Ford and Volkswagen in its effort to bring self-driving taxis to market, and Zoox is on the verge of revealing its purpose-built autonomous taxi to the world, beginning service as soon as next year within confined, geo-fenced areas. At the moment, these manufacturers are generally targeting Level 4 capability, in which the occupants won’t have to pay attention or intervene at all, unless they want the vehicle to stop for any reason—in which case it will simply pull over when it’s safe.
They also share broadly similar technological approaches: loading up the cars with sensors and then installing a detailed, constantly updated basemap to help plug gaps in what the sensors are able to detect. “We subscribe to the practice of using the right tool for the job,” says Argo CEO Bryan Salesky, whose company has partnered with robotics experts at Carnegie Mellon University in Pittsburgh, where it’s based. “So we use a mix of hardware and software solutions. It’s critical to use a multimode approach with [laser-based] LiDAR, cameras and radar since each has its strengths and weaknesses, although together they provide the redundancy and robustness required for a safe and reliable perception system.”
But questions remain about whether these systems will function in all possible conditions, even within those well-mapped, geo-fenced areas—in particular regarding the so-called “corner cases,” such as snow-covered road markings, flooded roads during rainstorms or emergency crews gesturing to drivers with hand signals. The answer depends on the combination of the sensors and the basemap—knowing, for instance, where the road markings are in a whiteout scenario—but also on a slow rollout of the entire system, beginning in geo-fenced urban environments in which conditions can be monitored by a command centre.
The world may be complex but it is also finite, and a detailed, rigorously maintained basemap will likely go a long way toward enabling near-autonomy. Beyond that, it will be up to the car’s computers to keep everyone safe.
In average daily driving, that’s eminently achievable, and likely a much better proposition when it comes to averting accidents and keeping traffic moving. “This is where we actually have a big advantage relative to humans,” says Jesse Levinson, chief technology officer at Zoox. “As a human, you can really only pay attention in one direction at a time, and even then, sometimes, people get distracted. Our sensors are measuring everything around us many, many times every second, and they’re always paying attention. So even if an unidentified object darts in front of the vehicle or a sinkhole develops, we have a much better chance of noticing that and reacting faster than the human could.”
A fair point, but even that will only get you through so many corner cases. What struck me most about my ride in Waymo’s Pacifica wasn’t how it managed stoplights or turn lanes, but whether the machine would ever deliberately behave as though it were human. Drivers glance around constantly, make eye contact, signal pedestrians and do random, weird stuff like juking around tiny divots in the pavement to avoid a momentary unpleasant jostle. Will the future autonomous cars we’ve been promised do that, too? Going further, will the cars be able to develop a sixth sense about vehicle or pedestrian intentions, or draw conclusions from seemingly disparate details—a wind storm and a precariously leaning tree, for example—that astute drivers would instinctively avoid?
That’s where things get fuzzy, and where experts like Cummings are most sceptical about Level 5 driving. In a recent paper, she cites the limits of artificial intelligence in managing everything from a camera’s ability to properly identify objects in unexpected positions—say, a motorcycle lying on the ground after an accident—to what AI specialists call “brittleness.” This refers to an AI system’s inability to adapt to conditions it wasn’t trained for, leaving it reliant on its code to bail it out of uncertainty. There’s also concern about excessive reliance on machine learning (i.e., those miles and miles of system training) to anticipate every single conceivable scenario, and about a computer’s ability to replicate the kind of “top-down” reasoning essential for truly advanced decision-making. That means the computer assesses a situation in its entirety, rather than just building up an understanding from a thousand disparate elements. The former is how humans work; the latter, well, isn’t. “Autonomous systems today can only realize very specific instances that they have been taught to both recognize and act accordingly,” Cummings says. “Until we can figure out how to approximate top-down reasoning, full self-driving will be elusive.”
That doesn’t mean we won’t all be better off with something that still comes up short of that lofty standard. Computerised safety systems and automotive copilots will do wonders in terms of avoiding crashes and ensuring that fatigue, confusion or impairment don’t spell doom for hapless citizens, and the convenience of semi-automated driving will take the edge off commutes while allowing us to binge-watch TV shows to our hearts’ content.
This will arrive, with near certainty, over the next decade. You won’t be able to send your car to pick up your kids at camp, nor will you be able to sack out while being chauffeured through the snow up to St Moritz. But you will be able to stretch your endurance, reduce your fatigue on long trips, get those reports done before reaching the office and, most importantly, drive assured that the computer has your back: that it’s staying vigilant far beyond your ability to do so, will be able to react far faster when some sort of weirdness comes at you at 80 mph and will be impervious to error, misjudgment and even meddling by external forces, whether at the code level via hacking and viruses or through deliberate spoofing on the road. That is, it’ll know when a stop sign is present as well as when one should be there but has been removed. But the gulf between “capable” and “intelligent” is large enough to drive a bus through, so don’t call your ride “autonomous” until it actually is smarter than you are.