Once upon a time, mobile megapixels were the key. In fact, not so many years ago, many questioned whether there was any value whatsoever in having a camera built into your phone. Convergence for its own sake is undeniably a problem in today’s crossover-soaked market, but it’s now hard to argue that photography and the modern smartphone aren’t a compelling match. As the technology – and the rivalry – has increased, though, so the simple megapixel has lost some of its clarity, and with the smartphone stakes never higher, there’s a revolution of sorts underway to reframe mobile imagery.
As ever, the primer. Each camera sensor is made up of a number of individual pixels. Each pixel captures light. In theory, at least, the more pixels, the more light captured, and the more data for your final image. For a while, 5-megapixels was the phone standard, then 8-megapixels. Now, we’re seeing handsets with 12-megapixels and more.
Not all pixels are created equal, however. For a start, size matters: a bigger pixel means it can capture more light in the same amount of time as a smaller pixel, or alternatively the same amount of light in less time. That means faster shutter speeds can be supported, and less blur from either fast-moving subjects or shaky hands.
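The effect is easy to quantify with a back-of-the-envelope sketch, assuming square pixels and light gathered scaling with pixel area (the specific micron figures are illustrative, taken from the comparisons later in this piece):

```python
# Light gathered per pixel scales (roughly) with pixel area.
# Compare a 1.1-micron pixel against a 2.0-micron pixel:
small = 1.1 ** 2   # area of a 1.1-micron pixel, in square microns
large = 2.0 ** 2   # area of a 2.0-micron pixel

ratio = large / small
print(f"The larger pixel gathers roughly {ratio:.1f}x the light")  # ~3.3x

# Equivalently, the bigger pixel can match the smaller one's exposure
# in about a third of the time - a faster shutter, so less motion blur.
```

That threefold difference in light per pixel is the core trade being argued over in everything that follows.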
The obvious route would be to put plenty of big pixels into a phone, but then you run into the bulk issue. While each individual pixel is small – usually under two microns square in the average phone camera sensor – combined they can demand a fair amount of space in your converged device. Factor in the lenses – themselves usually consisting of five different plastic elements, stacked together – and it becomes clear that smaller pixels make for an easier overall package.
That’s the route we’ve seen firms like Samsung, Sony, and LG take in recent smartphone generations. Samsung’s Galaxy S 4, for instance, offers 13-megapixel resolution in a form-factor not much bigger than the 8-megapixel-toting Galaxy S III it replaced; its pixels come in at 1.1 microns. Sony’s Xperia Z runs at the same resolution, but the Xperia Z1 we’re expecting to see debut next week is widely tipped to ramp that up to a heady 20-megapixels.
King of the megapixel phones right now is the Nokia Lumia 1020. Like its near-experimental predecessor, the 808 PureView, the Lumia 1020 has a whopping 41-megapixel sensor, demanding a hunchback of sorts to accommodate the camera assembly. Nokia’s pixels aren’t the largest out there, but the company does something different with them: in a process called oversampling, it effectively treats clusters of adjacent pixels as an extra-rich data source for each point in a more manageable-resolution final image.
So, while by default you’ll get roughly 5-megapixel stills from the Lumia 1020, Nokia says the actual images are better quality than those from its 8- and 13-megapixel rivals. By comparing the data from the cluster of pixels, the PureView system can discard any obviously glitchy data (which might show up as noise on a different phone) and take the average of the rest to get the most accurate blue of the sky, for instance.
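The idea can be sketched in a few lines. This is not Nokia’s actual pipeline — the function name, the tolerance value, and the outlier test are all illustrative assumptions — but it shows the principle of discarding glitchy readings and averaging the rest of a cluster:

```python
from statistics import mean, median

def oversample(cluster, tolerance=40):
    """Collapse a cluster of neighbouring pixel readings into one
    'super-pixel' value: discard obvious outliers (likely sensor
    noise), then average whatever remains. Illustrative only."""
    mid = median(cluster)
    kept = [v for v in cluster if abs(v - mid) <= tolerance]
    return mean(kept) if kept else mid

# Seven pixels agree the sky is around 200; one glitchy pixel reads 90.
cluster = [201, 198, 203, 200, 199, 202, 197, 90]
print(oversample(cluster))  # the noisy 90 is discarded; result is 200.0
```

On a conventional sensor that stray reading would survive as a speck of noise; here it is simply voted out by its neighbours.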
At the other end of the scale is the HTC One, with effectively 4-megapixels, though HTC’s UltraPixel technology actually has a few similarities with Nokia’s PureView. Where the Lumia effectively treats each cluster as a data-rich pixel, HTC simply uses bigger-than-average pixels. The roughly two micron pixels improve low-light performance and, HTC argues, unless you’re enlarging your smartphone shots hugely, you’ll not be done a disservice by having fewer megapixels versus the rest of the handset mainstream.
Each of the more alternative approaches has its secondary advantages. Nokia’s PureView system, for instance, offers a lossless digital zoom: you can get approximately 3x closer to the subject by sacrificing oversampling rather than that final 5-megapixel resolution, with the Lumia 1020 cropping out a full-res segment of the overall frame to give you the effect of a zoomed-in image.
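The zoom trade-off falls out of simple division: you can crop into the full frame until the crop is no bigger than the output resolution, at which point any further zoom would mean real upscaling. A minimal sketch, using Lumia-1020-like numbers that are illustrative rather than exact sensor specifications:

```python
def lossless_zoom_limit(sensor_w, sensor_h, out_w, out_h):
    """Maximum 'lossless' zoom factor: how far you can crop into the
    full-resolution frame before the crop falls below the output
    resolution and genuine upscaling would be required."""
    return min(sensor_w / out_w, sensor_h / out_h)

# A ~38-megapixel 4:3 capture delivered as a ~5-megapixel 4:3 image
# (figures chosen for illustration, not quoted from Nokia):
print(round(lossless_zoom_limit(7136, 5360, 2592, 1936), 1))  # ~2.8x
```

Which is where the “approximately 3x” figure comes from: at maximum zoom you’re simply being handed a native-resolution crop of the sensor, with the oversampling benefit traded away.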
HTC, meanwhile, offers Zoe: simultaneous burst photography and Full HD video capture. While other phones can certainly grab a still during video recording, those stills are often captured at the same 1920 x 1080 resolution as the footage. HTC’s One can fire off 20 full-resolution stills in the space of a few seconds, all the while recording 1080p video.
Sensors are only the start of it, of course. Of equal importance is the lens through which the light is focused – bad optics means glitchy images or aberrations around the periphery of the frame – and how the camera assembly as a whole is supported. Nokia and HTC, for instance, offer optical image stabilization (OIS) on certain models, where the camera is physically shifted to offset movement from the user’s hand; Samsung and others opt for digital image stabilization, which is less physically bulky but can struggle to get the same smoothing results.
Then there’s the processing: what the phone does with the data the sensor captures. Each camera is tuned in different ways according to the tastes of the manufacturer; Samsung has its reputation for amping up the color saturation, for instance. A shot processed to your tastes will usually be preferable to one of higher resolution but differently finessed; not for nothing has the iPhone 5’s camera, while only 8-megapixels on paper, gathered a loyal and vocal following thanks to the capabilities of Apple’s imagery DSP.
Where once, then, the obvious route to “improving” a phone camera was to climb the megapixel scale, just as with the diminishing returns from faster processors with more cores, it’s no longer such a clear-cut decision for manufacturers. Instead, we’re seeing more evolved, considered approaches, some more outlandish than others.
Sony, for instance, has its experience in standalone digital cameras to call upon, and has been integrating its G-series lenses into its top-tier Xperia line for a generation or two now. The upcoming Xperia Z1 is already known to feature G-series optics, thanks to Sony’s inability to stop teasing about the upcoming phone even as its debut nears at IFA. Other rumors include 4K video recording support, which would tie neatly into Sony’s Ultra HD TV business, where sets are already on sale but content is in relatively short supply.
Samsung, on the other hand, appears unwilling to give up the high megapixel counts that have climbed with recent Galaxy S and Note models – little surprise since big numbers invariably look better, on paper at least, than smaller ones. Still, the company is believed to be working on a new optical image stabilization system that, though not expected to show up until Samsung’s next big flagship, will reportedly manage to accommodate both OIS and around a 16-megapixel sensor.
Apple too is looking to refine the camera in its new iPhone 5S, expected to arrive next month, though optical stabilization may not be its chosen route. Instead, there are whispers of a new motion-sensing chip in the refreshed flagship, and while its full purpose is unknown, one possibility is that it could be used to improve the anti-shake processing without requiring the iPhone 5S to accommodate a full OIS system.
We’re not short of more outlandish routes, however. Samsung led the charge in the so-called “connected camera” segment, convergence pushing together the sort of wireless radios you might more commonly expect to see in a phone or tablet with the photography features from a traditional point-and-shoot camera. The upshot is a camera that can instantly share stills and video online without requiring a computer in-between. Still, Samsung saw some criticism about the quality of the sensors its connected cameras used, and so retaliated with the 4G-toting Galaxy NX, not just a point-and-shoot but a proper interchangeable-lens model.
If you’re swapping lenses, though, why not make the lens itself intelligent? Sony is allegedly doing just that, with the rumored Lens Cameras apparently set to arrive at IFA 2013 next week. Where convergence brings the connected camera together, Sony’s strategy is seemingly to break them apart again, putting more capable optics and sensors into a chunky barrel section that can be clipped to a smartphone when the handset’s own camera falls short of expectations.
That way, the phone’s touchscreen, broad connectivity, app support, and existing spot in your pocket are all combined with an alleged improvement in picture quality. There’s no redundancy, either: no separate data plan for your camera, and no whole separate OS and app suite to buy, learn, and keep updated.
Could smart modularity be the solution to mobile photography where the demands for better images from more compact devices bump up against the limitations – physical and of physics itself – of camera technology?
The old adage that the best camera is the one you have with you may be a cliche, but it’s also true for the vast majority of users. Although there are some who wouldn’t dream of leaving the house without their DSLR, for most it’s a case of pulling their phone out of their pocket and grabbing a shot as soon, and as easily, as possible. That’s why features like OIS, faster camera app load times, swifter saving so you can move onto the next image, better low-light performance, and similar all make a big difference, since they address so readily the key needs of the mass market.
Still, there’s undoubtedly a place for the more unusual approaches, even if not all of them make it in the competitive device market. People will carry something else if the use-case is compelling, and efforts to challenge things like how zooming is offered, or how modularity splits the components of a connected camera and which should end up in each part, need only to prove their worth to find an audience in a world where life isn’t quite being lived if it isn’t being documented and shared online.
SlashGear’s IFA 2013 coverage will begin on Tuesday, September 3.