Google has quietly patched a Glass security exploit that could have allowed hackers to take control of the wearable by showing it a QR code, the researcher who identified the flaw tells SlashGear. The exploit, discovered by Marc Rogers, Principal Security Researcher at Lookout Mobile Security, took advantage of Glass’ streamlined setup process that saw the camera automatically – and transparently to the wearer – spot QR codes in images and use them to trigger WiFi connections and other configurations. By creating malicious codes, and hiding them in images, Rogers was able to get Glass to connect to a compromised network, monitor all of the network traffic from the wearable, and even take full remote control.
The exploit – which we referred to in our June interview with Rogers, though without specific details as Google and Lookout were still addressing the fix at the time – has been fixed as of Glass firmware XE6, released on June 4. It’s a turnaround the Lookout researcher is impressed by, after only informing the search giant of the issue on May 16. “This responsive turnaround indicates the depth of Google’s commitment to privacy and security for this device,” he says, “and set a benchmark for how connected things should be secured going forward.”
At the root of the issue was how Google attempted to handle Glass setup, given the non-traditional input options the wearable offers. Without a keyboard, and with only voice-recognition and minimal trackpad access using the small panel on the side of the headset, the Glass team turned instead to visual setup tools.
Using QR codes – the glyphs also known as “2D barcodes” – Glass could be configured to connect to a certain WiFi network, Bluetooth device, or other service. To minimize the need for the user to thumb through menus, Glass would automatically identify any QR codes in images snapped with the camera, and act on them without prompting.
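Google has not published the exact format of Glass’ setup codes, but the general idea resembles the widely used WiFi QR payload convention popularized by the ZXing project, which packs network credentials into a short string that a scanner can act on. A minimal sketch of that convention, purely for illustration (the `EvilAP` network name and functions here are hypothetical, not Glass’ actual format):

```python
# Illustrative sketch of a WiFi-configuration QR payload in the common
# ZXing-style convention: WIFI:T:<auth>;S:<ssid>;P:<password>;;
# Glass' actual setup codes were Google-specific; this is NOT their format.

def make_wifi_payload(ssid: str, password: str, auth: str = "WPA") -> str:
    """Build a WiFi QR payload string in the ZXing convention."""
    def esc(s: str) -> str:
        # Fields containing \ ; , or : must be backslash-escaped.
        for ch in "\\;,:":
            s = s.replace(ch, "\\" + ch)
        return s
    return f"WIFI:T:{auth};S:{esc(ssid)};P:{esc(password)};;"

def parse_wifi_payload(payload: str) -> dict:
    """Naive parser for unescaped payloads (sketch only)."""
    assert payload.startswith("WIFI:") and payload.endswith(";;")
    fields = payload[len("WIFI:"):-2].split(";")
    return {k: v for k, v in (f.split(":", 1) for f in fields if f)}

payload = make_wifi_payload("EvilAP", "hunter2")
print(payload)  # WIFI:T:WPA;S:EvilAP;P:hunter2;;
```

The danger Rogers identified lies not in the format itself but in acting on such a payload silently: a scanner that joins whatever network the code names, without confirmation, hands network choice to whoever printed the code.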
It’s that automation – which came with no notification to the user that codes had been spotted and acted upon – which opened up the loophole Rogers could take advantage of. By reverse-engineering Google’s QR codes, he could create a range of his own glyphs that would instruct Glass to connect to a WiFi network of his choosing. Using the software tool SSLstrip, he could then gain access to all of the network traffic from the wearable, such as messages, emails, and Hangouts calls.
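SSLstrip, Moxie Marlinspike’s tool, works by sitting between the victim and the web and rewriting HTTPS links in server responses to plain HTTP, so that the victim’s subsequent requests travel unencrypted and can be read in transit. A toy illustration of that core rewriting step (not the actual tool, which also proxies the connection and tracks which URLs it downgraded):

```python
import re

def strip_https_links(html: str) -> str:
    """Toy illustration of SSLstrip's core trick: downgrade https:// links
    to http:// so the client's follow-up requests leave unencrypted.
    (The real tool also proxies traffic and remembers downgraded URLs so
    it can re-upgrade them on the server side.)"""
    return re.sub(r"https://", "http://", html, flags=re.IGNORECASE)

page = '<a href="https://mail.example.com/login">Sign in</a>'
print(strip_https_links(page))
# <a href="http://mail.example.com/login">Sign in</a>
```

On a network the attacker controls, as in Rogers’ demonstration, every page the wearable loads can be rewritten this way before it reaches the device.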
Taking it one step further, by pushing Glass to a page on the wireless access point that took advantage of an Android 4.0.4 vulnerability, Rogers could then hack the headset itself and actually take control of it, even to the point of remotely turning on the camera and seeing what the wearer was looking at.
As of XE6, Google has changed the Glass software so that the camera will only identify QR codes when the user specifically triggers scanning through the settings, rather than looking for them proactively. The use of 2D barcodes for settings was seen as a first step for the technology and wearables; more everyday examples could have been automatically translating menus in foreign languages, or automatically downloading music tracks from QR codes discreetly embedded in band posters.
The Lookout researcher doesn’t expect this to be the last vulnerability identified in Glass, though he also argues that it’s probably a good thing. By running through the hardware and software in limited “Explorer Edition” public trials first, he points out, by the time the consumer version arrives – expected sometime in 2014 – users will be more “able to trust Glass … because it has been tested.”
Still, it’s indicative of a largely unconsidered issue as more and more devices get not only smarter but increasingly autonomous. “When you have billions of connected devices, without UIs, how do you manage updates?” he asked, rhetorically, warning that we could see a new age of potential loopholes as ways of patching flaws lag behind functionality.
Next up, Lookout intends to pore over other connected devices in other fields – Rogers told us he’s looking at car manufacturers, environmental controls, and smartwatches – to see what exploits he can uncover. If the developers of those gadgets are looking for a good example of update practices to follow, though, they could do worse than mimic Google, he says. Otherwise, poorly-managed security could lead to the public simply not trusting tomorrow’s gadgets.
“There’s a risk that we will get a little bit scared by new things, and there’s a risk that we could miss out on cool things [as a result],” Rogers explained, had the flaw not been spotted until the commercial model. It’s an example of how the so-called “internet of things” raises new challenges for security experts and manufacturers, he says, especially given that some of the companies developing such devices specialize in either software or hardware, but seldom both.