Researchers Develop Advanced Interactive 3D Face Technology

University of Washington

University of Washington researchers have developed a system that can scour the web for more than 200 pictures of a celebrity, assemble the images into a reference model, and then animate the celebrity as a detailed 3D face, capturing even subtle facial motions and lip movement.
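As a rough, hypothetical illustration of one idea behind building a face model from many unconstrained photos (this is not the researchers' actual algorithm), one can imagine averaging corresponding facial landmark positions across a large photo collection to obtain a stable reference shape:

```python
def average_landmarks(photos):
    """Average corresponding (x, y) landmark points across photos.

    `photos` is a list of landmark lists; each landmark list holds
    (x, y) tuples for the same facial points in the same order.
    """
    n_photos = len(photos)
    n_points = len(photos[0])
    reference = []
    for i in range(n_points):
        avg_x = sum(p[i][0] for p in photos) / n_photos
        avg_y = sum(p[i][1] for p in photos) / n_photos
        reference.append((avg_x, avg_y))
    return reference

# Hypothetical landmark data from three photos of the same person
# (e.g. eye corner, nose tip, chin), in image coordinates:
photos = [
    [(30.0, 40.0), (50.0, 60.0), (50.0, 90.0)],
    [(32.0, 38.0), (52.0, 62.0), (48.0, 92.0)],
    [(28.0, 42.0), (48.0, 58.0), (52.0, 88.0)],
]
print(average_landmarks(photos))  # → [(30.0, 40.0), (50.0, 60.0), (50.0, 90.0)]
```

Averaging over hundreds of photos smooths out lighting, pose, and expression differences, which is why a large image collection helps; the real system operates in 3D and handles far more than landmark positions.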

A demonstration of the technology is currently available as a video on YouTube, with examples including Tom Hanks, George W. Bush, Barack Obama, and Daniel Craig. The researchers also showed that the animated faces can process and speak audio tracks from YouTube videos, even having an animated Obama speak sentences recorded by Bush.

The technology also reportedly gathers pictures of the celebrity at different ages in order to better map the face, making it possible to ‘age’ the celebrity in 3D form. One face can also take on another face’s mannerisms and motions without losing the natural look of the original face.
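A minimal sketch, under assumed simplifications, of the expression-transfer idea described above: apply another face's motion as a per-point offset from its neutral pose, so the target face keeps its own identity (its neutral shape) while copying the motion. This is an illustration, not the paper's method:

```python
def transfer_expression(source_frame, source_neutral, target_neutral):
    """Add the source face's per-point motion offsets to the target's neutral shape.

    Each argument is a list of (x, y) points in corresponding order;
    the source's deviation from its own neutral pose is applied to the target.
    """
    return [
        (tx + (sx - nx), ty + (sy - ny))
        for (sx, sy), (nx, ny), (tx, ty)
        in zip(source_frame, source_neutral, target_neutral)
    ]

# Hypothetical 2-point example (e.g. upper and lower lip):
source_neutral = [(30.0, 40.0), (50.0, 90.0)]
source_frame   = [(30.0, 42.0), (50.0, 95.0)]   # mouth opening: points move down
target_neutral = [(33.0, 45.0), (55.0, 95.0)]   # a differently shaped face

print(transfer_expression(source_frame, source_neutral, target_neutral))
# → [(33.0, 47.0), (55.0, 100.0)]
```

Because only the motion offsets are transferred, the target's own proportions survive, which is the intuition behind mapping one face's mannerisms onto another without distorting it.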


Researchers on the project aren’t looking to develop a celebrity gimmick or toy, though. “You might one day be able to put on a pair of augmented reality glasses and there is a 3-D model of your mother on the couch,” senior author Kemelmacher-Shlizerman told UWToday, detailing plans to build 3D puppets of family members from personal photographs and home videos.

“Imagine being able to have a conversation with anyone you can’t actually get to meet in person,” said Steve Seitz, a computer science professor at the university. “We’re trying to get there through a series of research steps. One of the true tests is can you have them say things that they didn’t say but it still feels like them? This paper is demonstrating that ability.”

The team has also touted the technology’s lack of need for complex camera and face-tracking equipment, since it works by scouring existing photographs and visual material. “We asked, ‘Can you take Internet photos or your personal photo collection and animate a model without having that person interact with a camera?’” Kemelmacher-Shlizerman told UWToday. “Over the years we created algorithms that work with this kind of unconstrained data, which is a big deal.”

The 3D animation technology is to be presented in Chile at the International Conference on Computer Vision this month, under the title “What Makes Tom Hanks Look Like Tom Hanks.” With funding from mega corporations like Intel, Google, and Samsung, the technology is likely to be under a lot of eyes in the tech world.

Charlie Nash is a libertarian writer, memeologist, and child prodigy. When he is not writing, he can usually be found chilling at the Korova Milk Bar, mingling with the infamous. You can follow him on Twitter at @MrNashington.
