This is a video rendering of an edited version of facial motion capture recorded in Faceshift. In many instances the animation required rekeying in Maya to achieve credible expressions, but the foundation animation exported from Faceshift worked well once several dozen new keys were set. Rekeying was done on new animation layers blended with the Faceshift export. The animation is a test of whether a realistic infant avatar can be created by retargeting facial expressions from an adult actor, since babies exhibit very different preverbal communication patterns from socialized adults. It was produced as part of a final project for Chris Bregler's Spring 2013 Motion Capture class at New York University.
This is a facial expression motion capture test done as part of a project for Chris Bregler's Spring 2013 Motion Capture class at New York University. The capture and retargeting are performed in real time using Faceshift. The tracking profile was trained on the Faceshift standard expressions with the default fits applied; creating custom fits and expressions resulted in poorer-quality tracking. The infant model was purchased from TurboSquid, with rigging modifications by Anna Mabarak.