Facebook can now use artificial intelligence to alter user selfies and fix common issues such as eyes closed by a mid-photo blink.
A recently published research paper by two Facebook engineers outlines a new machine learning method that can edit a photo to open a person’s eyes where they were originally closed, the Verge reports. According to the paper, the method is still at the research stage, and there is no sign yet that Facebook will ship it as a feature on its site, though it would be unsurprising if the company chose to do so.
The technique uses a machine learning method known as a generative adversarial network, or GAN. The system uses photos of an individual with their eyes open to “train” itself to alter that individual’s closed eyes in another photo, a process referred to as “in-painting.” The method is not perfected yet: the software struggles with photos of people wearing glasses, people with long hair, and faces at extreme angles. But as can be seen in the example below, when it works, it works very well.
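Setting the network internals aside, the data flow the paper describes can be sketched as follows. This is a minimal illustrative sketch, not Facebook’s implementation: the `generator` and `discriminator` here are stand-in stubs that only mimic the shape of an exemplar-conditioned GAN, where a generator fills the masked eye region using a reference photo of the same person, and a discriminator judges whether the result looks real.

```python
# Minimal sketch of the exemplar-GAN ("ExGAN") data flow: a generator
# in-paints a masked region conditioned on a reference photo, and a
# discriminator scores realism. All internals are illustrative stubs,
# not the trained convolutional networks from the paper.

def generator(masked_photo, exemplar):
    """In-paint the masked (eye) region of `masked_photo`, conditioned on
    an exemplar photo of the same person with their eyes open."""
    # Stub: copy exemplar pixels into the masked region. A real ExGAN
    # would synthesize these pixels with an adversarially trained network.
    return [e if p is None else p for p, e in zip(masked_photo, exemplar)]

def discriminator(photo):
    """Score how 'real' a photo looks (stub: real iff nothing is masked)."""
    return 0.0 if any(p is None for p in photo) else 1.0

# Toy "photos": flat lists of pixel intensities; None marks the
# closed-eye region that needs in-painting.
open_eyes_exemplar = [0.2, 0.9, 0.8, 0.3]    # reference shot, eyes open
closed_eyes_photo  = [0.2, None, None, 0.3]  # target shot, eyes closed

result = generator(closed_eyes_photo, open_eyes_exemplar)
print(result)                 # → [0.2, 0.9, 0.8, 0.3]
print(discriminator(result))  # → 1.0 (no masked pixels remain)
```

During real training, the discriminator’s score would be fed back to the generator as a loss signal, pushing the in-painted region toward results that are both photo-realistic and consistent with the exemplar identity.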
The abstract of the two Facebook engineers’ paper reads:
This paper introduces a novel approach to in-painting where the identity of the object to remove or change is preserved and accounted for at inference time: Exemplar GANs (ExGANs). ExGANs are a type of conditional GAN that utilize exemplar information to produce high-quality, personalized in-painting results. We propose using exemplar information in the form of a reference image of the region to in-paint, or a perceptual code describing that object. Unlike previous conditional GAN formulations, this extra information can be inserted at multiple points within the adversarial network, thus increasing its descriptive power. We show that ExGANs can produce photo-realistic personalized in-painting results that are both perceptually and semantically plausible by applying them to the task of closed-to-open eye in-painting in natural pictures. A new benchmark dataset is also introduced for the task of eye in-painting for future comparisons.
The full research paper can be downloaded here.