Expressive Speech-Driven Facial Animation

Speaker  Yong Cao

Affiliation  Virginia Tech

Host  John Nordlinger

Duration  00:52:28

Date recorded  13 September 2007

Computer graphics and animation is a broad, multidisciplinary area of research. It serves as a visual tool for other fields, such as entertainment, scientific visualization, and medical imaging. At the same time, computer graphics continually renews its own research and technologies, inspiring artists and designers to create novel human-computer interactions.

While I am interested in a wide range of graphics problems, my current focus is character animation, especially facial animation. In computer animation research, automatically synthesizing realistic facial animation remains a very challenging problem. Little has been done on two issues in particular: real-time lip-syncing and the modeling of expressive visual speech.

In this talk, I will present a data-driven approach to both issues. First, I will present a greedy graph search algorithm for real-time lip-syncing; it performs far better than prior approaches and enables real-time motion synthesis from a large database of motions. Second, I will describe a machine learning approach to modeling expressive visual behavior during speech.
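The abstract does not detail the algorithm, but a greedy graph search over a motion database can be illustrated with a minimal sketch. The data layout below (nodes as phoneme-labeled motion segments, a dict of transition costs, the `greedy_lip_sync` function itself) is a hypothetical illustration, not the speaker's actual method:

```python
# Hypothetical sketch: greedy graph search for lip-sync motion synthesis.
# Nodes are (phoneme, segment_id) pairs; edges carry transition costs.
# All names and values here are illustrative assumptions.

def greedy_lip_sync(graph, costs, phonemes, start):
    """Walk the motion graph greedily: for each target phoneme, pick the
    successor segment labeled with that phoneme whose transition cost from
    the current segment is lowest. No lookahead, no backtracking - that is
    what makes the search fast enough for real time."""
    path = [start]
    current = start
    for target in phonemes:
        # Successors labeled with the target phoneme.
        candidates = [n for n in graph[current] if n[0] == target]
        if not candidates:
            candidates = list(graph[current])  # fall back to any successor
        current = min(candidates, key=lambda n: costs[(current, n)])
        path.append(current)
    return path


# Tiny example database: a silence segment followed by a few mouth shapes.
graph = {
    ("sil", 0): [("A", 1), ("B", 2)],
    ("A", 1): [("B", 2), ("A", 3)],
    ("B", 2): [("A", 3)],
    ("A", 3): [],
}
costs = {
    (("sil", 0), ("A", 1)): 1.0,
    (("sil", 0), ("B", 2)): 2.0,
    (("A", 1), ("B", 2)): 0.5,
    (("A", 1), ("A", 3)): 1.5,
    (("B", 2), ("A", 3)): 0.3,
}
path = greedy_lip_sync(graph, costs, ["A", "B", "A"], ("sil", 0))
# path == [("sil", 0), ("A", 1), ("B", 2), ("A", 3)]
```

Because each step considers only the current segment's outgoing edges, the per-frame cost stays bounded regardless of database size, which is the property a real-time system needs; the trade-off is that a greedy choice may miss a globally cheaper path.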

©2007 Microsoft Corporation. All rights reserved.