Google’s Project Glass is still on track to arrive with developers “early this year,” project lead Babak Parviz insists, with the wearable computer still undergoing work to refine the hardware, boost battery life, and develop compelling apps. “The feature set for the device is not set yet. It is still in flux,” Parviz told IEEE Spectrum, suggesting that Google still isn’t willing to cite specific features beyond the photo/video capture and messaging already demonstrated.
“We constantly try out new ideas of how this platform can be used. There’s a lot of experimentation going on at all times in Google,” Parviz said of the development work. “We’re also trying to make the platform more robust. This includes making the hardware more robust and the software more robust, so we can ship it to developers early this year.”
Part of that hardware work is to increase battery life, with Google still aiming for all-day longevity from the headset. That’s certainly ambitious, given the limitations demonstrated by alternative wearables from Vuzix and Olympus: there, continuous runtimes of around two hours are the maximum predicted, though Olympus has argued that, when used in periodic chunks, the battery in its system could last up to eight hours.
As for how wearers will interact with Glass, Parviz highlights the side-mounted trackpad that we’ve already seen Google employees make good use of. “We have also experimented a lot with using voice commands,” the former augmented reality researcher says. “We have full audio in and audio out, which is a nice, natural way of interacting with something that you’d wear and always have with you. We have also experimented with some head gestures.” Previous rumors suggested Google was using a bone-conduction system for private audio playback, inaudible to anybody but the wearer.
Hardware is only half the battle, however. Parviz argues that Glass is “an entirely new platform” and, while conceding that it doesn’t offer true augmented reality in its first generation, requires a new angle on software and services. “We’ve taken pictures and done search and other things with this device,” he says, though it could also involve elements pared from Google Now.
“I think since our platform allows for very quick access to information – if you need to have access to visual information, you almost instantly get it – something like Google Now could be very compelling” – Babak Parviz, Google
For developers, though a full SDK for Glass is not yet available, there are a few hints as to what they can expect when coding for the headset. “When we ship this, we will have a cloud-based API that will allow developers to integrate with Glass, which enables a wide variety of Glass services while keeping a consistent user experience,” Parviz confirms. “It’s the same API that we used to build the e-mail and calendar services that we test on Glass.”
With those APIs, developers will be able to deliver select data to a Glass user, rather than overwhelming them with all the information that might fit onto a typical smartphone screen. Instead, they’ll be able to pick out curated content – specific types of email, Parviz suggests – which would be shuttled to Glass and either displayed on the eyepiece or read out using text-to-speech, with spoken replies supported.
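From the developer’s side, the workflow Parviz describes – a cloud service pushing a curated item to Glass, to be shown on the eyepiece or read aloud by text-to-speech, with a voice reply allowed – might be sketched roughly as below. This is purely illustrative: the API itself has not been published, so the function name, field names, and card structure here are assumptions, not Google’s actual interface.

```python
import json

# Hypothetical "card" payload for a cloud-to-Glass push service.
# Field names (text, speak_aloud, actions) are illustrative guesses,
# not a published Glass API.
def make_glass_card(text, speak_aloud=False, allow_reply=False):
    """Build a minimal piece of curated content for the headset."""
    card = {"text": text}
    if speak_aloud:
        card["speak_aloud"] = True   # have Glass read it via text-to-speech
    if allow_reply:
        card["actions"] = ["REPLY"]  # let the wearer answer by voice
    return card

# e.g. a filtered email notification, spoken aloud with voice reply enabled:
card = make_glass_card("New email from your manager: meeting moved to 3pm",
                       speak_aloud=True, allow_reply=True)
payload = json.dumps(card)  # what the cloud service would send to the device
```

The point of the design, as Parviz frames it, is that the service decides which few items reach the wearer, rather than mirroring everything a phone screen would show.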
Though Sergey Brin has taken the spotlight with Google Glass more frequently, Parviz brings the technical background to the project. Before working at Google, he researched opto-electronic contact lenses for use as wireless displays, complete with wireless power.