MIT's CGI AI could make movies better looking and easier to edit

Modern films are packed with CGI effects, many of them so convincing that viewers can't tell they are CGI at all. Making those effects look realistic, however, is a difficult and time-consuming process. Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a new AI system that assists with image editing, making the process faster and the results look better.

One of the main challenges for human editors is determining which parts of an image are background and which are the subject. The MIT team's system takes an image and automatically decomposes it into a set of layers separated by "soft transitions" rather than hard edges. The researchers call this technique "semantic soft segmentation," or SSS.
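To make the idea of soft layers concrete, here is a minimal NumPy sketch, not MIT's code, in which each layer carries a per-pixel opacity, the opacities at every pixel sum to one, and the original image is recovered as the opacity-weighted sum of the layers. The toy image, opacity values, and variable names are all illustrative assumptions.

```python
import numpy as np

# Toy 2x2 RGB image: a red "subject" region blending into a blue "background".
image = np.array([
    [[1.0, 0.0, 0.0], [0.7, 0.0, 0.3]],
    [[0.3, 0.0, 0.7], [0.0, 0.0, 1.0]],
])

# Soft segmentation: per-pixel opacities for two layers (subject, background).
# Unlike a hard mask, boundary pixels get fractional weights.
alpha_subject = np.array([[1.0, 0.7],
                          [0.3, 0.0]])
alpha_background = 1.0 - alpha_subject  # opacities sum to 1 at every pixel

# Each soft layer is the image weighted by its opacity map.
layer_subject = image * alpha_subject[..., None]
layer_background = image * alpha_background[..., None]

# Summing the layers reconstructs the original image exactly.
assert np.allclose(layer_subject + layer_background, image)

# Editing one layer (here, replacing the background with solid green) keeps the
# soft transition intact, so the result has no hard cut-out edges.
edited = layer_subject + alpha_background[..., None] * np.array([0.0, 1.0, 0.0])
print(edited)
```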

SSS analyzes the color and texture of the original image and combines that information with predictions from a neural network about what the objects in the image actually are. Once the soft segments are computed, the user no longer has to trace transitions by hand or tweak each layer's appearance individually, which makes editing tasks like replacing backgrounds and adjusting colors much easier.
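The description above, low-level color and texture cues fused with high-level cues from a neural network, can be illustrated with a heavily simplified sketch. The function below is not the published SSS algorithm: it assumes a hypothetical `semantic` feature map produced by some pretrained recognition network, and it simply soft-clusters the concatenated features. It only shows how combined cues can yield soft, rather than hard, segment assignments.

```python
import numpy as np

def soft_segments(rgb, semantic, n_segments=3, temperature=0.1, seed=0):
    """Simplified stand-in for semantic soft segmentation (illustrative only).

    rgb:      (H, W, 3) low-level color values in [0, 1].
    semantic: (H, W, C) per-pixel features from a pretrained network (assumed input).
    Returns per-pixel weights of shape (H, W, n_segments) that sum to 1.
    """
    h, w, _ = rgb.shape
    # Combine low-level color cues with high-level semantic cues,
    # mirroring the "color and texture + neural network info" idea.
    feats = np.concatenate([rgb, semantic], axis=-1).reshape(h * w, -1)

    # Naive k-means: pick random pixels as cluster centers, refine briefly.
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(h * w, n_segments, replace=False)]
    for _ in range(10):
        d = np.linalg.norm(feats[:, None, :] - centers[None, :, :], axis=-1)
        hard = d.argmin(axis=1)
        centers = np.array([feats[hard == k].mean(axis=0) if np.any(hard == k)
                            else centers[k] for k in range(n_segments)])

    # Soft assignment: closer clusters get higher weight, and the weights fade
    # gradually near object boundaries instead of switching abruptly.
    d = np.linalg.norm(feats[:, None, :] - centers[None, :, :], axis=-1)
    logits = -d / temperature
    weights = np.exp(logits - logits.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights.reshape(h, w, n_segments)
```

Feeding the returned weight maps into the compositing sketch above would let an editor recolor or replace a single segment without producing hard cut-out edges.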

For now, SSS works only on still images, but the team believes it can be extended to video in the near future, which would open the door to filmmaking applications. The researchers hope to get the system to the point where editors can combine images into fantasy worlds for film with a single click.

In the meantime, SSS could be used on social platforms like Instagram and Snapchat to make more realistic filters. The team is also working to shorten the time needed to process an image from minutes to seconds.

SOURCE: MIT