Valve's Gabe Newell Talks Wearable Computing, Touch And Tongues
When he's not trash-talking Windows 8, Valve's Gabe Newell is pondering next-gen wearable computing interfaces and playing with $70,000 augmented reality headsets, the outspoken exec has revealed. Speaking at the Casual Connect game conference this week, Valve co-founder and ex-Microsoftie Newell singled out head-up display lag, along with input and control for wearables, as the next big challenges facing mobile computing, VentureBeat reports.
"The question you have to answer is, "How can I see stuff overlaid in the world when you have things like noise?" You have weird persistence problems" Newell said, asked about the post-touch generation of computing control. "How can I be looking at this group of people and see their names floating above them? That actually turns out to be an interesting problem that's finally a tractable problem."
Tractable it may be, but so far it's not cheap. "I can go into Mike Abrash's office and put on this $70,000 system, and I can look around the room with the software they've written, and they can overlay pretty much anything, regardless of what my head is doing or my eyes are doing. Your eyes are actually troublesome buggers," Newell explained. The second half of the issue, though, is input, which the Valve CEO described as "open-ended."
"How can you be robustly interacting with virtual objects when there's nothing in your hands? Most of the ideas are really stupid because they reduce the amount of information you can express. One of the key things is that a keyboard has a pretty good data rate in terms of how much data you can express and how much intention you can convey ... I do think you'll have bands on your wrists, and you'll be doing stuff with your hands. Your hands are incredibly expressive. If you look at somebody playing a guitar versus somebody playing a keyboard, there's a far greater amount of data that you can get through the information that people convey through their hands than we're currently using. Touch is...it's nice that it's mobile. It's lousy in terms of symbol rate" Gabe Newell, CEO, Valve
Google's Glass has sidestepped the issues somewhat, not attempting to directly overlay or replace specific objects in the real world with a wearable display, but instead floating more straightforward graphics just above the wearer's eye-line. That removes the precision problem, but it means Glass will be less capable of mediating reality – i.e. changing which parts of the real world the user actually sees – and more about augmenting it with extra data.
As for control, Google has already shown off its side-mounted touchpad on Glass, and a recently published patent application fleshed out some of the other possibilities. They include voice recognition and hand-tracking using cameras, though Google also describes using low-level artificial intelligence to reduce the amount of active navigation Glass users may have to do.
For instance, Glass could recognize – using microphones built into the headset – that the wearer is in a car, Google explains, and thus automatically show maps and a navigation interface. Those same microphones could be used to spot mentions of people's names and call up contextually relevant information about them, working as an aide-mémoire.
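To picture the kind of context-driven behavior the patent application describes, here is a hypothetical sketch; the context labels, class names, and card contents are invented for illustration and do not reflect Google's actual Glass software or APIs:

```python
# Hypothetical sketch of context-driven display logic of the sort the patent
# application describes. All names and rules here are invented for illustration;
# they are not Google's actual Glass software or API.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Card:
    """A simple overlay card shown in the wearer's eye-line."""
    title: str
    body: str

def classify_context(audio_cues: set) -> str:
    """Map cues detected by the headset microphones to a coarse context label."""
    if {"engine", "road_noise"} & audio_cues:
        return "driving"
    if "spoken_name" in audio_cues:
        return "conversation"
    return "idle"

def card_for_context(context: str, detail: str = "") -> Optional[Card]:
    """Pick what, if anything, to overlay for the detected context."""
    if context == "driving":
        return Card("Navigation", "Map and turn-by-turn directions")
    if context == "conversation":
        return Card("Contact", f"Recent notes about {detail or 'this person'}")
    return None

# Example: the audio pipeline reports car-like noise, so a navigation card appears.
print(card_for_context(classify_context({"engine", "road_noise"})))
```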
Somewhat more bizarre, though, is research within Valve into using the human tongue as an input method. "It turns out that your tongue is a pretty good way of connecting a mechanical system to your brain," Newell explained. "But it's really disconcerting to have the person you're sitting next to going, 'Arglearglargle.' 'You just Googled me, didn't you?' I don't think tongue input is in our futures."