The arrival of Google’s driverless cars on the roads of Nevada has re-awakened concerns around robot vehicle security, with experts unconvinced that the increasingly complex kit is safe from malware. Fears about the future vulnerabilities of cars left to guide themselves may not be a significant concern in Google’s small-scale trial today, but they persist given the likelihood of commercial implementations of self-driving hardware, with researchers pointing to manufacturers’ mixed track record in locking down infotainment and other systems in “dumb” cars to date.
Currently, although Nevada has approved road trials of the Google test cars, each must be accompanied by two human riders, one of whom must be able to take the wheel should the system malfunction. The cars use a combination of high-speed pedestrian and obstacle tracking, among other things, to ensure they do not plow into other road users, but the autonomy can be overridden if the human driver touches the steering wheel or brake pedal.
That hands-on recovery may not always be possible as driverless tech grows more commonplace, experts warn. In a report from McAfee and Wind River last year [pdf link], the Intel-owned companies highlighted the growing amount of code present in new cars, and how manufacturers have made the embedded systems that run it remotely accessible to allow for periodic upgrades.
“Frost and Sullivan estimates that cars will require 200 million to 300 million lines of software code in the near future,” the report’s authors write. “The increasing feature set, interconnectedness with other embedded systems, and cellular networking or Internet connectivity can also introduce security flaws that may become exploitable.”
Google has been upfront about the software considerations since its first public comments on the driverless car research. “Safety has been our first priority in this project. Our cars are never unmanned,” software engineer Sebastian Thrun wrote back in 2010. “We always have a trained safety driver behind the wheel who can take over as easily as one disengages cruise control. And we also have a trained software operator in the passenger seat to monitor the software.”
It’s the prospect of that trained software operator no longer being onboard that causes security experts the most headaches. With existing infotainment systems already complex enough to prompt dealers into employing tech specialists to guide new drivers through their navigation, multimedia and other equipment, relying on those drivers to also keep track of autonomy software stability may be overly ambitious.