Speaker: Sofien Bouaziz
Hosts: Zicheng Liu and Rick Szeliski
Date recorded: 11 July 2013
Recent advances in realtime performance capture have brought within reach a new form of human communication. Capturing the dynamic facial expressions of a user and retargeting these expressions to a digital character in realtime makes it possible to animate arbitrary virtual avatars with live feedback. Compared to communication via recorded video streams, which offer only a limited ability to alter one's appearance, such technology opens the door to fascinating new applications in computer gaming, social networks, television, training, customer support, and other forms of online interaction.

In this talk, I will present a new algorithm for realtime face tracking on commodity RGB-D sensing devices such as the Kinect. Our method requires no user-specific training, no calibration, and no other form of manual assistance, thus enabling a range of new applications in performance-based facial animation and virtual interaction at the consumer level. Compelling 3D facial dynamics can be reconstructed in realtime without the use of face markers, intrusive lighting, or complex scanning hardware.
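As context for the retargeting step described above, here is a minimal sketch of the standard linear blendshape model commonly used to drive a digital character from tracked expression weights. This is an illustration of the general technique, not the speaker's specific algorithm; the function name, array shapes, and toy data are assumptions for the example.

```python
import numpy as np

def retarget(neutral, blendshapes, weights):
    """Deform a character mesh from its neutral pose.

    neutral:     (V, 3) array of neutral vertex positions
    blendshapes: (K, V, 3) array of expression target meshes
    weights:     (K,) tracked expression weights, typically in [0, 1]
    """
    # Per-expression vertex offsets relative to the neutral face.
    offsets = blendshapes - neutral
    # Linear combination of offsets, added back to the neutral mesh.
    return neutral + np.tensordot(weights, offsets, axes=1)

# Toy example: a 2-vertex "mesh" with a single smile blendshape.
neutral = np.zeros((2, 3))
smile = np.array([[[0.0, 1.0, 0.0],
                   [0.0, -1.0, 0.0]]])
mesh = retarget(neutral, smile, np.array([0.5]))
```

In a live system, the weights would be re-estimated from the RGB-D stream every frame, so the character mesh follows the user's expressions with low latency.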
©2013 Microsoft Corporation. All rights reserved.