The iPhone 11 is here, and launching alongside it is the iPhone 11 Pro. The iPhone 11 Pro’s triple camera array received a lot of attention from Apple during the company’s reveal event today, with Phil Schiller teasing one intriguing feature in particular. That feature is called Deep Fusion, and while Schiller didn’t get too specific about its capabilities, his tease was definitely enough to pique our interest.
The goal of Deep Fusion, it seems, is to construct an image that’s as sharp as possible. With the feature, the cameras on the back of the 11 Pro will take a total of nine different images, one of which is a long exposure shot. After that, the 11 Pro puts its neural engine muscle to use, building a new image pixel-by-pixel.
Schiller himself seems pretty excited about this feature. “It is computational photography mad science,” he said on stage today. “It is way cool.” He used a photo of a man wearing a patterned sweater as an example of what Deep Fusion is capable of, zooming in on the sweater’s pattern to show very little loss of detail.
Of course, a stage demonstration is a lot different from a real-world test, so we’ll have to wait until this tech is widely available to determine how good it actually is. For that matter, we’re left waiting on Apple for more details about how Deep Fusion will work, because Schiller spent a relatively short amount of time talking about it on stage today.
We do know that the iPhone 11 Pro will be outfitted with three different cameras on the back: a 12MP wide camera, a 12MP ultra wide camera, and a 12MP telephoto camera. We’ll see how the iPhone 11 Pro’s triple cameras work soon enough, as these new iPhones are all launching on September 20th.