Here's Why Intel Makes Perfect Sense For Google Glass v2

Guess what: Google Glass isn't dead. The news that Intel will probably be found inside the next generation of Glass wasn't so much a surprise for its "x86 vs ARM" narrative as for the confirmation that Google is not only still committed to the wearable project but actively developing it. Although unconfirmed, the whispers have it that Intel's silicon will oust the aging TI cellphone processor found in the current iteration of Glass – quite the coup for a chipmaker still struggling to make a dent in mobile. The switch is about more than just running Glass' Android fork, however: it could mean a fundamental and hugely beneficial evolution in how Glass operates, and how it addresses some of the current shortcomings in battery life and dependence on the cloud.

I was a Glass enthusiast when Google first announced the project, and a Glass early adopter. I'm also one of the people whose Glass use has narrowed dramatically in the months since; most of the time it stays in its case on my desk, rather than on my face.

Glass' shortcomings have proved pretty fundamental to its design and the way it operates, and a big part of that is its reliance on the cloud. The majority of the processing Glassware does, along with core services like voice recognition, is handled remotely on Google's servers.

That means Glass requires a persistent data connection in order to be useful. Always-on data means you need to have your radios on all the time – either tethered to a phone or connected directly to a WiFi network – and that chews through Glass' relatively small battery in no time at all. A wearable you can only actually wear for a few hours at a time if you want to make the most of its capabilities has limited appeal.

Those capabilities, too, have been hamstrung. Google's insistence on running Glassware in the cloud, combined with its strictness about what developers can and can't do with the wearable, has meant app innovation has all but dried up. The fact that the more affordable consumer version initially expected sometime in 2014 failed to materialize also left developers uncertain whether there was any real potential for their software.

Glass' styling was always controversial – the fact that Google botched its implementation of prescription frames, making the wearable irremovable without tools, hardly helped – and what was initially just seen as geeky soon became a totem of privacy intrusion when its camera was (unfairly, perhaps, but inevitably) branded the epitome of pervasive surveillance.

A switch to Intel's architecture may seem, on the face of it, just a matter of chips, but it could address far more of those issues than you might expect.

Key is Intel's focus on divorcing advanced processing of things like speech from the cloud, and instead making such functionality self-contained. Back in January, Intel confirmed that voice recognition in its Jarvis concept wearable – powered by the frugal Quark chips expected to show up in the next-generation Glass in 2015 – would not rely on cloud processing but instead be done on-device. Jarvis would be able to handle local services such as spoken requests for music playback, navigation with offline maps, and voice-to-text, all while completely disconnected.

That has benefits around power as well as speed – no lag while communicating with the server and then waiting for it to respond – and Intel followed up in May with the acquisition of Ginger Software's Natural Language Processing division. That software enables natural language voice recognition: the difference between conversationally asking your digital assistant for something and having to memorize key command phrases.
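
To make that distinction concrete in the loosest possible terms, here's a toy sketch – every name in it is hypothetical, and nothing here reflects Intel's or Ginger Software's actual technology. A command-phrase system fires only on exact, memorized strings, while even a crude natural-language layer can map varied phrasings onto the same action:

```python
# Toy sketch only: hypothetical names, not Intel's or Ginger's real code.

# A command-phrase system: only exact, memorized strings trigger an action.
RIGID_COMMANDS = {
    "ok glass, take a picture": "camera.capture",
    "ok glass, get directions": "navigation.start",
}

def rigid_match(utterance):
    """Fires only when the user recites the exact command phrase."""
    return RIGID_COMMANDS.get(utterance.lower().strip())

# A (very) crude natural-language layer: keys on intent words instead,
# so conversational phrasings still resolve to the same actions.
INTENT_KEYWORDS = {
    "camera.capture": ("photo", "picture", "snap"),
    "navigation.start": ("directions", "navigate", "route"),
}

def natural_match(utterance):
    """Maps varied, conversational phrasings onto a known action."""
    text = utterance.lower()
    for action, keywords in INTENT_KEYWORDS.items():
        if any(word in text for word in keywords):
            return action
    return None

request = "could you snap a photo of this for me?"
print(rigid_match(request))    # None -- not a memorized phrase
print(natural_match(request))  # camera.capture
```

Real natural language processing is vastly more sophisticated than keyword matching, of course, but the payoff is the same: the wearer gets to talk, rather than recite.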

Jarvis was designed as an earpiece on steroids, with no display at all. Glass v2 is likely to keep the eyepiece, but potentially make much less use of it if Google and Intel can leverage the idea of a wearable that uses whispering in your ear as its primary method of communication. Motorola's Moto Hint shows there's a whole arena for audio assistance that has only been explored in shallow terms so far.

Less reliance on a wireless connection and less time with the power-hungry screen turned on could help Glass v2 run for longer on a charge, without reducing what the wearer can do. If Google can combine that with a redesign that makes it more discreet – something patent applications suggest may be underway – that will also help with acceptance; Intel has already shown a willingness to work with fashion experts on wearables (though the MICA smartwatch-bracelet demonstrates that doesn't necessarily make for a pretty device).

The lingering question is the camera, and whether Google can ever do enough to cut through the haze of privacy concerns in which the wearable finds itself immersed. I've already made it clear that I think the initial positioning of Glass as a wearable camera was both misguided and lazy – the real advantage of Glass could've been in how it delivered contextually- and location-relevant micro-notifications in a hands-free way – but there's no denying that, of the uses it seems to have retained, impromptu photography tool is the most common.

Perhaps a more obvious indicator that recording is underway would be sufficient; or maybe Google needs to simply bite the bullet, say "we were wrong", and drop the camera altogether. Both approaches have their disadvantages.

What's clear to me, though, is that more solid core functionality would leave Glass v2 in a far stronger position. Local speech processing and smarter native skills would make it more useful to wearers, not to mention grant more freedom from the charging cable. Give users a more compelling device, and they'll be more willing to wear it and – vitally – more enthusiastic as they explain its virtues to those who still have questions.
