Intel’s autonomous car plans aren’t pie in the sky, and there are already driverless vehicles using the company’s technologies plying the public roads. Vehicle tech supplier Delphi, a mainstay in the automotive manufacturing world, brought along its latest self-driving prototype to Intel’s Autonomous Vehicle Showcase at its new San Jose, CA facility, and offered to take me for a spin. Funnily enough, what’s most special is what you can’t actually see.
Think of a driverless vehicle today, and you’re probably imagining a big sedan or SUV topped with an array of spinning laser scanners and dotted with cameras. Delphi’s goal with this new prototype was to put the sensors into places where they’d be suitable for production. So, there are Mobileye cameras behind the windshield, but no vast LIDAR turret on the top. Side cameras are faired in, unlike on most jury-rigged prototype cars I’ve seen.
The goal, Delphi explained to me, was to make this particular Audi Q5S as close to a regular car as possible. No cabin full of displays and extra controls; no bristling with oddly-shaped sensors. Just a virtual driver that can navigate you from A to B with no further instructions than the destination.
What’s interesting is that the driving style of the autonomous car evolved over the course of the day. Delphi started out with fairly conservative settings, the prototype leaving extra space around it, slowing earlier for turns, and generally behaving more cautiously. However, as the day went on, the engineers cranked up its enthusiasm.
By mid-afternoon, and the time I was strapped into the back seat of the heavily-modified Q5S, you could describe the AI as eager. Delphi’s engineer navigated us out of the parking lot manually, but once on the open road the car’s systems took over. This was no reticent chauffeur, afraid of either the gas or the brakes: it nipped from lane to lane swiftly to prepare for turns, and enthusiastically kept pace with the traffic around it.
That autonomous caution, you see, is all down to programming and not technical limitation. As we already saw several years ago, driverless vehicles left near-unfettered can already best human drivers on race tracks. The talents of the machine aren’t the limiting factor: it’s us, the meat inside.
Most people don’t quite trust the machine, and certainly not in their first few outings in driverless vehicles. Think of the mild concern you had when you first tried adaptive cruise control, and had to retrain your brain from slamming on the brakes as the traffic ahead slowed. On a pragmatic level you know the computers are talented; on a more instinctive one, though, it’s enough to turn what should be a relaxing aid into a stressful one.
Delphi envisages two ways of dealing with that. The first is fairly obvious, with a digital display in the center console effectively showing those inside the car what the various sensors have spotted. Other vehicles, parked or in motion, are translucent grey blobs; there are road markings, and traffic signals, and signage. When the car is about to make a turn, a big flashing green arrow alerts you to where it plans to go.
I suspect that most people will leave it on for their first handful of journeys, but then find that they switch it over to something else: Netflix, maybe, or a web browser. In that familiarization phase, though – before they learn that the car can, indeed, be trusted – it’s a great way to keep on top of what the vehicle has seen and answer the nagging “does it know there’s a dump-truck there?” paranoia.
Delphi’s other strategy is more complex. The company’s thinking now is that, for any self-driving car system to be accepted, the style in which it operates will need to be customizable. Much as the engineers changed the aggression of the prototype I rode in, users of production cars built on similar technology will be able to set their vehicle to the appropriate level to match their own comfort.
It’s not the first company to think along those lines. Back in 2014, in fact, mapping company HERE – subsequently invested in by Audi, BMW, Daimler AG, and most recently Intel – argued that autonomous cars needed to model human drivers if they weren’t to cause havoc on the roads. Working too much like a computer wouldn’t be reassuring, HERE argued, it would only serve to highlight just how different the AI is from the rest of us.
We’re still a fair way from having a dial that runs from “pensioner” through to “boy racer” on the dashboard of our driverless vehicle. One of the big issues, Delphi told me, is the high-definition mapping its cars rely upon for details of things like curb levels and which lane markings are where. In the absence of a comprehensive commercial map to buy, it’s been doing its own mapping. It’s a prerequisite that severely curtails where the cars can actually go.
Nonetheless, the fact that, without the Delphi wrap, you could easily mistake the exterior of the car for a regular Audi SUV is a triumph in itself. Autonomous vehicles are finally past the “can they work?” stage and are now firmly in the “how can we make this market-acceptable?” phase.