Twitter neural network makes image previews more attractive

Twitter is rolling out an improved image preview experience, one that crops images so previews show the subject of the photo rather than just whatever happens to lie in the center. The improvement comes courtesy of Twitter engineers and a trained neural network, the latter of which has learned to identify the parts of an image a viewer is most likely to look at.

Twitter is a great way to share images with a lot of people, but the service doesn't always offer the most attractive presentation. To provide a clean, consistent look despite the many possible aspect ratios, Twitter crops every image into a neat box that shows only a preview.

If an image contained a face, the preview would usually do a good job of focusing on that face, so selfies and portraits looked fine when cropped. Many images don't contain faces, of course, and in some the face isn't clear enough to detect. In those cases, Twitter's system would instead crop the image around its center portion, regardless of what may or may not be there.

Twitter researchers have figured out how to solve this problem without compromising the platform's image-sharing speed: a neural network that determines the most interesting parts of an image as soon as the user uploads it. These regions usually include text, people, faces, and other prominent objects.
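To make the idea concrete, here is a minimal sketch of that upload-time step: a saliency model scores every pixel, and the highest-scoring location becomes the anchor for the crop. The model below is a stand-in (a synthetic Gaussian "hotspot"), not Twitter's actual network, and the names such as fake_saliency_model are hypothetical.

```python
import numpy as np

def fake_saliency_model(height: int, width: int) -> np.ndarray:
    """Stand-in for a trained saliency network: returns a 2D map of per-pixel scores."""
    ys, xs = np.mgrid[0:height, 0:width]
    # Pretend the interesting subject sits roughly 70% down and 30% across the image.
    cy, cx = int(height * 0.7), int(width * 0.3)
    sigma = min(height, width) / 8
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))

def find_saliency_peak(saliency: np.ndarray) -> tuple[int, int]:
    """Return the (row, col) of the most salient pixel."""
    return tuple(int(v) for v in np.unravel_index(np.argmax(saliency), saliency.shape))

saliency = fake_saliency_model(600, 800)
peak = find_saliency_peak(saliency)
print(peak)  # (420, 240) for this synthetic 600x800 example
```

In a real pipeline, the scoring would run once at upload time and the resulting anchor point would be stored alongside the image, so serving previews stays fast.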

The researchers call these "high saliency" regions, and thanks to a neural network trained to identify them, Twitter will start displaying better image previews via more intelligent cropping. Rather than seeing a preview of a road and the bottom half of a dog, for example, the crop would instead focus on the dog even if it isn't in the center of the photo.
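The cropping itself can then be a simple geometric step. The sketch below illustrates the general technique the article describes (not Twitter's exact logic): center a preview window of a fixed aspect ratio on the saliency peak, then clamp the window to the image bounds so an off-center subject still ends up in frame.

```python
def crop_around_point(img_h: int, img_w: int,
                      peak_row: int, peak_col: int,
                      aspect: float = 16 / 9) -> tuple[int, int, int, int]:
    """Return (top, left, height, width) of a preview crop centered near the salient point."""
    # Use the full width if the image is narrower than the target aspect,
    # otherwise use the full height, so the crop stays as large as possible.
    if img_w / img_h >= aspect:
        crop_h, crop_w = img_h, int(round(img_h * aspect))
    else:
        crop_w, crop_h = img_w, int(round(img_w / aspect))
    # Center the box on the salient point, then clamp it inside the image.
    top = min(max(peak_row - crop_h // 2, 0), img_h - crop_h)
    left = min(max(peak_col - crop_w // 2, 0), img_w - crop_w)
    return top, left, crop_h, crop_w

# An off-center subject (the "dog" at row 420, col 240 of a 600x800 photo)
# lands inside the preview box instead of being cut in half.
print(crop_around_point(600, 800, 420, 240))  # -> (150, 0, 450, 800)
```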

SOURCE: Twitter