DIY Google Glass puts iOS in front of your eyes

Google may be beavering away on the final stages of Project Glass before the Explorer edition reaches developers, but in the meantime DIY wearable computers are springing up, some with Apple's iOS at their core. A straightforward combination of an iPod touch, an off-the-shelf wearable display, a Bluetooth camera and a set of safety goggles was enough for AI researcher Rod Furlan to get a glimpse of the benefits of augmented reality, he writes at IEEE Spectrum, though the headset raised as many questions as it answered.

Furlan's hardware falls roughly in line with what we've seen other projects piece together in earlier AR attempts. He opted for a MyVu eyepiece – a 0.44-inch microdisplay culled from a cheap Crystal headset, as used in this UMPC-based wearable back in 2009 and this BeagleBoard version in 2010 – hooked up to the composite video output of a 4th-gen iPod touch; that way, he can see a mirror of the iPod's UI floating in his line of sight.

Meanwhile, a Looxcie Bluetooth video camera – stripped of its casing and attached to the goggles – streams video to the iPod touch wirelessly. Furlan says he's cooking up a second-gen version running off a Raspberry Pi, another approach we've seen wearables experimenters take. That, he says, will allow more flexibility in handling the Looxcie's input, as well as better support for other sensors such as accelerometers.
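Furlan hasn't detailed that sensor setup, but for a flavor of what a Pi-based rig could add, here's a minimal Python sketch of the kind of polling loop involved; the ADXL345 accelerometer, its I2C address, and the register map are our assumptions for illustration, not Furlan's parts list.

```python
# Hypothetical sketch: polling an ADXL345 accelerometer over I2C on a
# Raspberry Pi. Assumes the common ADXL345 breakout at its default
# address (0x53) on I2C bus 1, with the python-smbus package installed.
import time
import smbus

ADDR = 0x53           # ADXL345 default I2C address (our assumption)
bus = smbus.SMBus(1)  # I2C bus 1 on the Pi

bus.write_byte_data(ADDR, 0x2D, 0x08)  # POWER_CTL: enable measurement mode

def read_axes():
    # Six data registers starting at 0x32: x, y, z as little-endian int16
    raw = bus.read_i2c_block_data(ADDR, 0x32, 6)
    def to_int16(lo, hi):
        v = lo | (hi << 8)
        return v - 65536 if v & 0x8000 else v
    return tuple(to_int16(raw[i], raw[i + 1]) for i in (0, 2, 4))

while True:
    x, y, z = read_axes()
    print("accel: x=%d y=%d z=%d" % (x, y, z))
    time.sleep(0.1)
```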

The interesting part is how Furlan's experience of the wearable evolved, from initial discomfort and a sense of information overload – the feeling of needing to keep up with every notification, server status, stock price, and message that pops up – to a less conscious consumption of the data flow:

"When I wear my prototype, I am connected to the world in a way that is quintessentially different from how I'm connected with my smartphone and computer. Our brains are eager to incorporate new streams of information into our mental model of the world. Once the initial period of adaptation is over, those augmented streams of information slowly fade into the background of our minds as conscious effort is replaced with subconscious monitoring" Rod Furlan

That fits with what we've heard from Google itself: Glass project chief Babak Parviz said recently that part of the company's software work has been delivering a pared-back version of the usual gush of information that hits our smartphone and tablet displays. Developers, for instance, will be able to use a set of special cloud APIs to prioritize the specific content that gets delivered to the Android-based wearable.
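Google hadn't published those APIs at the time of writing, so what follows is purely a hedged sketch of what push-style delivery might look like: a web service POSTs a small JSON "card" to a cloud timeline endpoint on the wearer's behalf. The shape loosely follows Google's eventual Mirror API, but treat the endpoint URL, JSON fields, and token here as illustrative assumptions.

```python
# Hedged illustration of push-style card delivery to a Glass-like wearable.
# The endpoint, fields, and OAuth bearer token are assumptions, not
# confirmed details of Google's unreleased APIs.
import json
import urllib.request

ACCESS_TOKEN = "ya29.EXAMPLE"  # placeholder OAuth 2.0 bearer token

card = {
    "text": "Build server: all tests passing",  # the prioritized snippet
    "notification": {"level": "DEFAULT"},       # a low-key nudge, not an alarm
}

req = urllib.request.Request(
    "https://www.googleapis.com/mirror/v1/timeline",  # illustrative endpoint
    data=json.dumps(card).encode("utf-8"),
    headers={
        "Authorization": "Bearer " + ACCESS_TOKEN,
        "Content-Type": "application/json",
    },
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))
```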

Furlan concludes that the biggest advantage of wearables won't be overlaying data on top of the real world – what we know as augmented or mediated reality – but the ability to persistently record (and recall) all of our experiences. That differs from Google's vision, in which capturing photos and videos is only one subset of what Glass does, and the headset is gradually being positioned as a way to access a curated feed of the digital world, whether from Google Now prompts or something else.

[via 9to5Mac]