Ever been annoyed by blurry faces in your photos? You can’t turn back time, but thanks to AI you now have an alternative. A team of researchers at Duke University has developed an AI tool that can transform blurry, indistinguishable images of people’s faces into very convincing, more detailed computer-generated portraits.
Existing technology can scale a facial image up to 8x its original resolution. The Duke team, however, has managed to create realistic-looking faces with up to 64x the resolution, ‘imagining’ features such as wrinkles, eyelashes and facial hair that weren’t there in the first place.
According to Duke computer scientist Cynthia Rudin, who led the team, this is a breakthrough.
“Never have super-resolution images been created at this resolution before with this much detail,” she said in an interview on the university’s website. The technique has its drawbacks, however. It cannot, say, turn an out-of-focus, unrecognizable photo from a security camera into a crystal-clear image of a real person. What it can do is create new faces of people who do not exist, yet appear reasonably realistic.
Known as PULSE, the technique could in theory take low-resolution shots of almost anything and produce sharp, realistic-looking pictures, with applications ranging from medicine and microscopy to astronomy and satellite imagery.
The conventional approach to this problem takes a low-resolution image and ‘estimates’ the extra pixels needed by matching them, on average, to corresponding pixels in high-resolution images the computer has previously come across. Because of this averaging, however, textured regions such as hair and skin may not line up well and can end up looking blurry.
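To see why averaging blurs, here is a minimal, hypothetical sketch (not any specific product’s algorithm): upscaling a sharp black-to-white edge by linear interpolation fills the gap with in-between grey values, producing exactly the smoothed-out look described above.

```python
import numpy as np

def upscale_by_averaging(img, factor):
    """Naive linear interpolation along each row: each new pixel is a
    weighted average of its low-resolution neighbours."""
    h, w = img.shape
    out = np.zeros((h, w * factor))
    xs = np.linspace(0, w - 1, w * factor)  # new pixel positions on the old grid
    for r in range(h):
        out[r] = np.interp(xs, np.arange(w), img[r])
    return out

# A sharp vertical edge: pure black (0.0) then pure white (1.0).
edge = np.array([[0.0, 0.0, 1.0, 1.0]] * 2)
up = upscale_by_averaging(edge, 4)

# The original contains only 0.0 and 1.0; the averaged upscale
# introduces intermediate grey values where the edge was sharp.
has_grey = ((up > 0.0) & (up < 1.0)).any()
```

The same effect, in two dimensions and over many overlapping patches, is what turns crisp hair and skin texture into smooth blur.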
In the Duke initiative, the system searches through AI-generated examples of high-resolution facial images and picks out the ones that look as much as possible like the input image when shrunk down to the same size.
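The search idea can be sketched in a few lines. This toy version (an assumption for illustration, not the researchers’ code) stands in random arrays for GAN-generated faces: downscale each candidate and keep the one closest to the low-resolution input.

```python
import numpy as np

rng = np.random.default_rng(0)

def downscale(img, factor):
    """Box downsampling: average each non-overlapping factor x factor block."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def search_best_candidate(low_res, candidates, factor):
    """Return the candidate whose downscaled version is closest (L2) to the input."""
    errors = [np.sum((downscale(c, factor) - low_res) ** 2) for c in candidates]
    return candidates[int(np.argmin(errors))]

# A 4x4 "low-res input" and 100 random 16x16 stand-ins for generated faces.
low = rng.random((4, 4))
cands = [rng.random((16, 16)) for _ in range(100)]
best = search_best_candidate(low, cands, factor=4)
```

In the real system the candidates come from a trained generator, so every one of them already looks like a plausible face; the search only has to find one consistent with the input.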
The team used a machine learning tool known as a “generative adversarial network,” or GAN: two neural networks trained on the same data set of photos. One network generates AI-created human faces that mimic the ones it was trained on, while the other inspects the results and decides whether they could pass for the real thing. Over time, the first network becomes so good that the second can no longer tell the difference.
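The adversarial dynamic can be shown at toy scale. In this hypothetical sketch (one-dimensional data instead of photos, hand-derived gradients instead of a deep-learning framework), the “generator” is an affine map of noise and the “discriminator” a logistic classifier; trained against each other, the generator’s samples drift toward the real data.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Generator G(z) = a*z + b; discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0
w, c = 0.1, 0.0
lr = 0.05
REAL_MEAN = 4.0  # "training photos": samples from N(4, 1)

for step in range(2000):
    real = rng.normal(REAL_MEAN, 1.0, size=32)
    z = rng.normal(0.0, 1.0, size=32)
    fake = a * z + b

    # Discriminator step: push D(real) -> 1 and D(fake) -> 0.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    grad_w = np.mean((d_real - 1) * real) + np.mean(d_fake * fake)
    grad_c = np.mean(d_real - 1) + np.mean(d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator step: push D(fake) -> 1 (non-saturating GAN loss).
    d_fake = sigmoid(w * fake + c)
    dx = -(1.0 - d_fake) * w        # d/dx of -log D(x)
    a -= lr * np.mean(dx * z)
    b -= lr * np.mean(dx)

# After training, the generator's offset b has moved toward the real mean.
```

A real GAN replaces the two affine maps with deep convolutional networks, but the tug-of-war between the two players is the same.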
With this, the computer can produce realistic-looking photos from distorted, low-quality input that other methods are unable to work with. And because it fills in the blanks, it can generate a variety of lifelike images from a single input, each resembling a slightly different person.
To illustrate the level of detail, imagine a 16×16-pixel facial image boosted to 1024×1024 pixels in a few seconds, adding more than a million pixels, akin to HD resolution. Details such as pores, wrinkles, and wisps of hair that were indistinct in the low-res photo appear crisp and clear in the computer-generated version.
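The arithmetic behind that claim is straightforward:

```python
# Going from 16x16 to 1024x1024 multiplies each side by 64,
# so the pixel count grows by 64 * 64 = 4096x.
low = 16 * 16          # 256 pixels in the input
high = 1024 * 1024     # 1,048,576 pixels in the output
added = high - low     # 1,048,320 pixels the model must "imagine"
print(added)           # 1048320
```

Every one of those new pixels is invented by the network, which is why the result is a plausible face rather than a recovered one.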
In other words, this is a scaling method for creating realistic faces, not one to count on for recovering high-quality photos of actual people.