Robust and Rapid Generation of Animated Faces from Video Images: A Model-Based Modeling Approach

Zhengyou Zhang, Zicheng Liu, Dennis Adler, Michael Cohen, Erik Hanson, and Ying Shan

Abstract

We have developed an easy-to-use and cost-effective system that constructs textured 3D animated face models from video with minimal user interaction. This is a particularly challenging task because faces lack prominent textures. We achieve robustness through a model-based approach: we make full use of generic knowledge of faces in head motion determination, head tracking, model fitting, and multiple-view bundle adjustment. The system first captures, with an ordinary video camera, images of a person sitting in front of the camera and turning their head from one side to the other. After five manual clicks on two images to indicate the positions of the eye corners, nose tip, and mouth corners, the system automatically generates a realistic-looking 3D human head model that can be animated immediately (different poses, facial expressions, and talking). With a PC and a video camera, a user can generate a model of their own face in a few minutes, import it into their favorite game, and watch themselves and their friends take part in the game they are playing. We have demonstrated the system live on a laptop computer at many events and constructed face models for hundreds of people; it works robustly under a variety of environment settings.
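The workflow the abstract describes can be sketched as a small pipeline. This is a purely illustrative Python outline, not the authors' code: the function and data-structure names are assumptions, and each stage (motion determination, tracking, model fitting, bundle adjustment, texturing) is summarized by a docstring rather than implemented.

```python
# Hypothetical sketch of the model-based pipeline described in the abstract.
# All names here are illustrative assumptions, not the authors' implementation.
from dataclasses import dataclass

# The five manually clicked landmarks per image: two eye corners,
# the nose tip, and two mouth corners.
LANDMARK_NAMES = ("left_eye", "right_eye", "nose_tip", "left_mouth", "right_mouth")

@dataclass
class FaceModel:
    view_count: int   # number of video frames used in multiple-view refinement
    textured: bool    # whether a texture map was blended from the video images

def build_face_model(frames, clicks_view_a, clicks_view_b):
    """Stages named in the abstract:
    1. use the landmark clicks on two images to determine head motion,
    2. track the head through the remaining frames,
    3. fit a generic face model to the recovered structure,
    4. refine with multiple-view bundle adjustment,
    5. blend a texture map from the video images."""
    assert len(clicks_view_a) == len(LANDMARK_NAMES) == len(clicks_view_b)
    # ... each stage would run here; this sketch only records what was consumed.
    return FaceModel(view_count=len(list(frames)), textured=True)

# Dummy pixel coordinates standing in for the user's five clicks on each image.
clicks = {name: (0.0, 0.0) for name in LANDMARK_NAMES}
model = build_face_model(frames=range(30), clicks_view_a=clicks, clicks_view_b=clicks)
```

The key point the sketch captures is how little manual input the system needs: the only user-supplied data are the two sets of five landmark clicks; everything downstream is automatic.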

Details

Publication type: TechReport
Number: MSR-TR-2001-101
Pages: 32
Institution: Microsoft Research