AI Weekly #8: AI Tool turns blurry, pixelated images into HD photos in seconds
In the past, image-upscaling methods could increase a picture's resolution only up to eight times. But researchers from Duke University, a private university in Durham, North Carolina, have developed a solution that can scale an image up to 64 times: a 16×16-pixel picture, for example, can be boosted to an impressive 1024×1024 pixels. This means an image so grainy that facial features are unrecognizable can become a photograph of HD quality. The .gif below shows the before-and-after of using this tool:
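For clarity, those scale factors apply per side of the image, so the total pixel count grows much faster; a quick check in plain Python:

```python
side = 16  # a 16x16-pixel input image

# Old methods: up to 8x per side; PULSE: up to 64x per side.
print(side * 8)    # 128  -> older methods top out around 128x128
print(side * 64)   # 1024 -> PULSE reaches 1024x1024

# Total pixel count grows by the square of the scale factor.
print((side * 64) ** 2 // side ** 2)  # 4096 = 64 * 64
```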
Facial characteristics such as the eyes and mouth can barely be made out in the blurry picture on the left, whereas the photo on the right shows the result, and it's all thanks to AI.
The method behind these improvements is called PULSE; detailed information about the project is available via the link attached here. Older methods take a low-resolution image and try to "guess" which missing pixels are needed to generate the end result, filling them in with pixel patterns the computer has previously learned. Because of this guessing, the resulting photos looked distorted and blurry in places: groups of neighboring pixels often failed to match the rest of the picture.
The Duke University team improved on this approach. Instead of taking a low-quality photo and adding detail to it, the PULSE system uses AI to scan high-quality images of faces, searching for ones that would look almost identical to the input photo if they were downscaled to its quality. PULSE can generate realistic-looking pictures from chaotic, poor-quality ones in a way no other tool achieves, which sets it apart as a potentially revolutionary photo-enhancing method.
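The search idea can be sketched with a toy example: among many candidate high-resolution images, keep the one whose downscaled version best matches the low-res input. The real PULSE explores a GAN's latent space with gradient descent rather than scoring a finite candidate pool, so the helper names and the average-pooling downscaler below are illustrative assumptions, not the project's actual code.

```python
import numpy as np

def downscale(img, factor):
    """Average-pool a square image by `factor` (size must be divisible by factor)."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def pulse_style_search(low_res, candidates, factor):
    """Return the candidate high-res image whose downscaled version
    is closest (L2 distance) to the low-res input."""
    errors = [np.linalg.norm(downscale(c, factor) - low_res) for c in candidates]
    return candidates[int(np.argmin(errors))]

rng = np.random.default_rng(0)
# A hidden 32x32 "true" image and its degraded 8x8 observation (factor 4).
true_hr = rng.random((32, 32))
low_res = downscale(true_hr, 4)

# Candidate pool: random high-res images, with the true one mixed in.
candidates = [rng.random((32, 32)) for _ in range(99)] + [true_hr]
best = pulse_style_search(low_res, candidates, 4)
print(np.allclose(best, true_hr))  # True: the matching candidate wins
```

The key design point this illustrates: the objective never compares against a known high-resolution ground truth, only against the low-resolution input after downscaling the candidate.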
The machine-learning tool used for photo generation is a generative adversarial network (GAN): two neural networks trained on the same set of photos. One network generates AI faces resembling those it was trained on, while the other takes this output and decides whether it is close enough to pass for an image from the set. The process is repeated until the second network can no longer detect any discrepancies.
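That alternating loop can be sketched on a toy 1-D problem: a one-parameter "generator" shifts noise toward the real data distribution, while a tiny logistic "discriminator" tries to tell real samples from fake ones. The model sizes, learning rate, and manual gradient updates below are simplifying assumptions for illustration, not the networks PULSE actually uses.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

mu = 0.0         # generator parameter: fake samples are mu + noise
w, b = 0.1, 0.0  # discriminator: D(x) = sigmoid(w*x + b), "probability x is real"
lr = 0.05

for step in range(2000):
    real = rng.normal(4.0, 1.0)       # real data, centred at 4
    fake = mu + rng.normal(0.0, 1.0)  # generator output

    # Discriminator step: push D(real) up and D(fake) down (manual gradients).
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr * ((1 - d_real) * real - d_fake * fake)
    b += lr * ((1 - d_real) - d_fake)

    # Generator step: nudge mu so the discriminator calls fakes "real".
    d_fake = sigmoid(w * fake + b)
    mu += lr * (1 - d_fake) * w

print(f"final mu: {mu:.2f}")  # mu drifts from 0 toward the real mean, 4
```

Training stops being useful once the discriminator is reduced to guessing, which is exactly the "cannot detect any discrepancies" endpoint described above; in PULSE's case the generator produces full face images, and the downscale-and-compare search happens inside that trained generator.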
Why this matters
As mentioned above, PULSE could set a new precedent for how low-definition photographs are enhanced, with potential uses in various fields. Nevertheless, the project leaders have stressed that the program cannot be used to recognise individuals, say, from a security-camera still, because "clearing up" real footage is not the project's goal. Rather, it forges new faces through the pixel-selection method described above, faces that belong to no real person but look convincingly realistic. So it won't reveal the identity of a criminal caught on camera, though it could still assist facial-recognition forensics during investigations.
PULSE’s main focus was human faces, but that does not rule out taking poor-quality pictures of almost anything and producing clear, accurate ones, with uses ranging from medicine and microscopy to astronomy and satellite imagery, according to project co-author Sachit Menon. That alone could change how many fields of science are impacted.
As a side note, the researchers will present PULSE at the 2020 Conference on Computer Vision and Pattern Recognition (CVPR), set to take place remotely from June 14 to June 19.