If you’re still laughing at Google+ and at Google Glass, it might be time to stop; Google has just shown that, together, they’re its next route to digitally understanding everything about you, and it slipped that through in the guise of a simple photo gallery tool. Highlights is one of the few dozen new features Google+ gained at I/O this past week, sifting through your auto-uploads and flagging up the best of them. Ostensibly it’s a bit of a gimmick, but make no mistake: Highlights is at the core of how Google will address the Brave New World of Wearables and the torrent of data that world will involve. And by the end of it, Google is going to know you and your experiences even better than you know them yourself.
Lifelogging isn’t new – Microsoft Research’s Gordon Bell, for instance, has been sporting a wearable camera and tracking his life digitally since the early 2000s – but its component parts are finally coalescing into something the mainstream could handle. Cheap camera technology – power-frugal enough to run all day, yet still with sufficiently high resolution, and bracketed with sensor data like location – has met plentiful cloud storage to handle the masses of photos and video.
More importantly, the public interest in recording and sharing memorable moments has flourished over the past few years, with Facebook over-sharing going from an embarrassment to commonplace, and Twitter and Tumblr evolving into stream-of-consciousness. For better or for worse, an event or occasion isn’t quite real enough for us unless we’re telling somebody else about it, preferably with the photos to prove it.
Into that arrives Glass. It’s not the only wearable project, and in fact it’s not even trying to immediately document your every movement, conversation, and activity. Out of the box, Glass doesn’t actually work as a lifelogger, at least not automatically. It didn’t take long, though, for Explorer Edition users to tweak the wearable and grant it those perpetual-memory skills, though we’ll need to wait for Google’s part of the puzzle before the true shift takes place.
Kickstarter project Memoto – which raised over half a million dollars for its wearable lifelogging camera, firing off two frames a minute all day, every day – faces not so much a hardware challenge (though the startup might disagree somewhat, given the slight delays caused by squeezing power-efficient camera tech into a tiny geek-pendant) as a software one. The issue isn’t taking photos, or storing them: it’s organizing them in a way that’s anywhere near manageable for the wearer.
Think about your last set of holiday photos. You probably took many more than you would have in the days of traditional film cameras. Maybe you synchronized them with iPhoto, or uploaded them to a Dropbox or Picasa gallery. Perhaps they went on Facebook, either sorted through or – more likely – simply dumped en masse. How many times have you looked through them, or shown them to somebody else?
Now, imagine having a whole day’s worth of photos to deal with. We’ll be conservative and assume you’re sleeping for eight hours – lucky you – and maybe have a couple of hours “privacy” time during which you’re showering, getting changed, or otherwise not camera-ready. Fourteen hours when you could be wearing your Memoto, then, or some other camera: 840 minutes, or 1,680 individual photos. In the course of a week, you’ve snapped 11,760 shots.
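The back-of-the-envelope arithmetic above is easy to sanity-check. A minimal sketch, assuming the scenario as stated – fourteen camera-ready hours a day and Memoto's two frames per minute:

```python
# Photo volume for a 2-frames-per-minute wearable camera, under the
# article's assumption of 14 waking, camera-ready hours per day.
FRAMES_PER_MINUTE = 2
HOURS_WORN = 14

minutes_per_day = HOURS_WORN * 60                      # 840 minutes
photos_per_day = minutes_per_day * FRAMES_PER_MINUTE   # 1,680 photos
photos_per_week = photos_per_day * 7                   # 11,760 photos
photos_per_year = photos_per_day * 365                 # 613,200 photos

print(photos_per_day, photos_per_week, photos_per_year)
```

Note that a full year at this pace works out to a little over six hundred thousand frames.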
[aquote]By the end of the year you’ve got over six hundred thousand photos[/aquote]
By the end of the year, you’ve got over six hundred thousand of them. Sure, plenty of them will be of the same thing, or blurry because you were running across the road at the time, or too dark to make out details. Many, many of them will just be plain dull. But they’ll all be there, sitting in the cloud waiting to be looked at.
Nobody is going to sift through six hundred thousand photos. And so the really clever thing the Memoto team is working on is the relevance processing all of those images are fed through. The exact details of the algorithm haven’t been confirmed – in fact it’s still something of a work-in-progress, and likely will be even when the first units start shipping out to Kickstarter backers – but it takes into account the location each image was taken at (there’s geotagging for each shot), the direction you’re facing, what interesting things are in the frame, and more.
That way you get the best of both worlds – in theory, at least. “All photos are stored and organized for you,” Memoto promises. “None are deleted, but the best ones are more visible.”
As Memoto sees it, that all amounts to about thirty frames per day: thirty potentially review-worthy shots out of more than sixteen hundred. Now, there’s no way of knowing quite how well the system will actually operate – we’re bound to miss out on some gems and have our attention drawn to some duffers – but make no mistake: we need this layer of abstraction if lifelogging is to be more than just a boon for those selling hard drives.
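To make the idea concrete, here is a hypothetical sketch of that kind of relevance filter: score each frame on a few signals and surface only the top handful per day. The signals, weights, and names below are illustrative assumptions, not Memoto's actual (unpublished) algorithm.

```python
# Illustrative relevance filter: nothing is deleted, but only the
# highest-scoring frames are made "more visible." The signals and
# weights are assumptions for the sake of the sketch.
from dataclasses import dataclass

@dataclass
class Frame:
    sharpness: float   # 0..1; blurry frames score low
    brightness: float  # 0..1; too-dark frames score low
    faces: int         # number of detected faces
    novelty: float     # 0..1; how different from neighboring frames

def relevance(frame: Frame) -> float:
    # Simple weighted sum; a real system would learn these weights.
    return (0.3 * frame.sharpness
            + 0.2 * frame.brightness
            + 0.3 * min(frame.faces, 3) / 3
            + 0.2 * frame.novelty)

def highlights(frames: list[Frame], per_day: int = 30) -> list[Frame]:
    # Keep every frame in storage; surface only the best few.
    return sorted(frames, key=relevance, reverse=True)[:per_day]
```

The key design choice – rank and hide rather than delete – is exactly what Memoto describes: the full archive survives, and only visibility changes.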
For a while, Google didn’t seem to have given managing the extra photos from wearables like Glass much consideration. In fact, the first evidence of photo sharing – automatically uploading to Google+, and being posted out with the generic #throughglass tag – was one of the more half-baked of the company’s implementations. That all changed, though, at I/O this week.
Google+ is the glue for Google’s ecosystem – what I call the “context ecosystem” – not least Glass; you may not want to use it as a social network, replacing or augmenting Facebook and Twitter, but if you want Google services or hardware you’re going to end up a Google+ user on some level. The new Highlights feature in Google+ is the key to unlocking Glass’ usefulness as a lifelogger.
“The Highlights tab helps you find photos you’ll want to share by automatically curating the images you upload to Google+ photos,” Google explained. “Highlights works by de-emphasizing duplicates, blurry images, and poor exposures while focusing on pictures with the people you care about, landmarks, and other positive attributes.”
For the moment, for most users, Highlights is a way of quickly cutting out duplicated shots. Took three or four pictures of your kids in the park, just to make sure they were all looking at the camera at the right time? Google+ Highlights will make sure you only see one of the nearly-identical frames, not all of them. No need to delete the others, just – as Gmail taught us with archive-not-delete email, a privilege of copious space and effective search – hide them from regular sight.
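One plausible building block behind that duplicate de-emphasis is perceptual hashing: reduce each image to a tiny fingerprint and hide any frame whose fingerprint is within a few bits of one already kept. Google hasn't said how Highlights actually works, so this is a minimal stdlib-only sketch of the general technique, operating on plain 2D grayscale grids rather than real image files.

```python
# Near-duplicate suppression via a toy "difference hash" (dHash).
# Images are 2D lists of grayscale values; a real system would decode
# actual photos and use far more sophisticated similarity models.

def dhash(pixels: list[list[int]]) -> int:
    # One bit per adjacent pixel pair: does brightness rise or fall?
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left < right else 0)
    return bits

def hamming(a: int, b: int) -> int:
    # Number of differing bits between two hashes.
    return bin(a ^ b).count("1")

def dedupe(images: list, threshold: int = 2) -> list:
    # Keep an image only if its hash is far enough from every kept hash;
    # near-duplicates are hidden from view, not deleted from storage.
    kept, kept_hashes = [], []
    for img in images:
        h = dhash(img)
        if all(hamming(h, k) > threshold for k in kept_hashes):
            kept.append(img)
            kept_hashes.append(h)
    return kept
```

The archive-not-delete parallel holds here too: frames filtered out by `dedupe` would stay in the library, merely dropped from the highlight view.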
As the flow of photos into Google+ turns into a torrent, fueled not least by wearables, those vague “other positive attributes” Google mentions will become the most important part, however. Highlights is going to become a curator not only of your galleries, but of how you reminisce; how you look back on what you did, where you did it, and who you did it with.
Google can already identify buildings, and locations, and people. It knows who your friends are. Factor in Events, and the communal photo sharing feature, and that will help Google+ fill in even more of the gaps. If it knows you were with your best friend, and your best friend was in Paris at the time, and what a number of famous Parisian landmarks look like, it’ll be able to do a pretty good job at piecing together a curated “holiday memories” album that’s probably more detailed than your own recollection of the trip.
[aquote]The comfort levels reported at I/O show this is not just old- versus new-school[/aquote]
If you’re clenching various parts of your anatomy over fears about privacy, you’re probably right to. Even with only around 2,000 Glass Explorer Edition headsets made, the controversy over the rights and responsibilities around having photos taken in public and in private is already out of all proportion to the hardware’s reach. Those at Google I/O this past week are undoubtedly a tech-savvy, open-minded bunch, but the range of comfort levels reported about being in the Glass gaze is a telling sign that there’s more to this than just old-school versus new-school.
The discussion is going to be broader than Google, of course – a Memoto camera, clipped to your coat or shirt, is arguably more discreet, and it’s almost certainly not going to be the last wearable camera – but how the companies involved process the data created is likely to be the biggest factor, and Google has a track record of giving privacy advocates sleepless nights.
If Glass – and wearables and lifelogging in general – is to succeed, however, this is a discussion that will have to be settled. We’re not talking about “how okay” it is for your email account to talk to your calendar: if the EU decides there should be a clear division between the two in the name of user privacy, the worst case is that you have to manually create appointments based on email conversations. But if the huge and inevitable rush of photos and video that wearables will facilitate isn’t addressed, then Glass and its ilk will stumble and fail. Our new digital brain needs permission to work its magic, but we’re still in the early days of seeing just how magical that might be.