Google Brain’s neural networks are now capable of “hallucinating” images from heavily distorted pictures.
What was once the exclusive province of cheesy science fiction is now reality. Pixel Recursive Super Resolution "synthesizes realistic details into images while enhancing their resolution," allowing for the digital "enhancement" of pictures that are little more than a mess of pixels; the AI uses its knowledge of facial structure to "make hard decisions" about what the image represents.
The research paper describes the challenge of creating a complex image from a simple one. The essential objection is the one that has been used to debunk photographic "enhancement" in movies for decades: you can't pull details from an image that were never captured, for the same reason you can't put a photograph under a microscope to examine the skin cells of the person in it.
So how does this “Pixel Recursive Super Resolution” work? It all comes down to educated guesswork.
The first part of the AI was "trained" by viewing countless images of human faces; the second learned, pixel by pixel, which 32×32 pixel enhancements of a given 8×8 pixel image are plausible. The two systems were then married to produce the most mathematically reasonable guess at what the image might be, using color and placement to shape facial features.
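To get a feel for that "marriage" of two guesses, here is a toy sketch, not Google's code, of the core idea: for each output pixel, combine a probability from a prior model (what faces usually look like) with a probability from a conditioning model (what the blurry input allows), and keep the highest-scoring value. All names and numbers below are illustrative assumptions.

```python
# Toy illustration of combining two per-pixel probability estimates,
# in the spirit of Pixel Recursive Super Resolution. Not the actual
# architecture; the values are invented for the example.

def best_pixel_value(prior_probs, conditioning_probs):
    """Pick the pixel value whose combined score is highest.

    prior_probs:        P(value) from a model trained on many faces
    conditioning_probs: P(value) given the low-resolution input
    """
    scores = {v: prior_probs[v] * conditioning_probs[v]
              for v in prior_probs}
    return max(scores, key=scores.get)

# The prior "knows" this spot on a face is rarely pure black (0);
# the blurry input is consistent with brightness values 120-140.
prior = {0: 0.01, 120: 0.40, 130: 0.45, 140: 0.14}
conditioning = {0: 0.30, 120: 0.20, 130: 0.25, 140: 0.25}

print(best_pixel_value(prior, conditioning))  # picks 130
```

Neither estimate alone settles the pixel; multiplied together, the face-prior's preference and the input's constraint agree on a single "educated guess," which is all the system is doing, millions of times over.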
Sometimes, it even succeeds.
So far, PRSR creates an image that can fool a human into thinking it's a genuine higher-resolution version only about 10% of the time when the subject is a person's face; that jumps to about 30% when the picture is of furniture. Most of the time, it's not quite right. Sometimes, it's closer to Jackson Pollock than Blade Runner. But it's a step forward, regardless.
It's just another permutation of Google's machine learning projects. A similar venture uses "Rapid and Accurate Image Super Resolution," or RAISR, to make similarly educated guesses when enlarging images, allowing them to be sent at lower resolution without a visible loss of clarity. It's already being applied to "more than 1 billion images per week," reducing affected users' bandwidth by around 33%.
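The bandwidth trick is simple in outline: ship a small image over the network and let the device blow it back up. The sketch below uses plain pixel doubling as a crude stand-in for RAISR's learned filters, just to show where the savings come from; the numbers are made up.

```python
# Illustrative sketch of the RAISR-style bandwidth idea: transfer a
# small image, enlarge it on the device. Pixel doubling here is a
# stand-in for the real learned upscaling filters.

def upscale_2x(small):
    """Double each pixel in both directions (nearest-neighbor)."""
    big = []
    for row in small:
        doubled = [p for p in row for _ in (0, 1)]
        big.append(doubled)
        big.append(list(doubled))
    return big

small = [[10, 20],
         [30, 40]]
big = upscale_2x(small)

# Sending the 2x2 image instead of the 4x4 one means transferring
# only a quarter of the pixels; the device reconstructs the rest.
print(len(big), len(big[0]))  # 4 4
```

A real system replaces the dumb doubling with filters learned from example photos, which is what lets the enlarged image look sharp rather than blocky, but the arithmetic of the savings is the same.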
In the future, maybe we won’t scoff when some exaggerated geeky persona on television tells his computer to “enhance.”
Follow Nate Church @Get2Church on Twitter for the latest news in gaming and technology, and snarky opinions on both.