Rong Xiao, Wujun Li, Yuandong Tian, and Xiaoou Tang
A fundamental challenge in face recognition lies in determining which facial features are important for identifying faces. In this paper, we propose a novel face recognition framework to address this problem. In our framework, 3D face models are used to synthesize a large database of realistic face images that covers a wide range of appearance variations due to changes in pose, illumination, and expression. A novel feature selection algorithm, which we call Joint Boosting, is developed to extract discriminative face features from this massive database. The major contributions of this paper are: (1) with the help of 3D face models, a massive database of realistic virtual face images is generated to enable robust feature selection; (2) because this database covers a wide range of face variations, the feature selection procedure needs to be trained only once, and the selected feature set generalizes to other face databases without re-training; (3) we propose a new learning algorithm, the Joint Boosting algorithm, which learns effectively and efficiently from a massive database directly, without converting face images into intra-personal and extra-personal difference images. This property is important for applying the algorithm to other general pattern recognition problems. Experimental results show that our method significantly improves recognition performance.
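The abstract describes Joint Boosting only at a high level and the paper's actual algorithm is not reproduced here. As a rough, generic illustration of the boosting-driven feature selection idea the abstract refers to, the sketch below runs AdaBoost with single-feature decision stumps and records which feature each round selects; all function and variable names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def boosted_feature_selection(X, y, n_rounds=3):
    """Generic AdaBoost-style feature selection with decision stumps.

    X: (n_samples, n_features) array; y: labels in {-1, +1}.
    Returns the list of feature indices chosen by the weak learners.
    This is an illustrative sketch, not the paper's Joint Boosting.
    """
    n, d = X.shape
    w = np.full(n, 1.0 / n)              # sample weights, uniform at start
    selected = []
    for _ in range(n_rounds):
        best = None                      # (weighted error, feature, threshold, polarity)
        for j in range(d):
            for t in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = np.where(pol * (X[:, j] - t) >= 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, pol)
        err, j, t, pol = best
        err = min(max(err, 1e-10), 1 - 1e-10)   # clamp to avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)   # weak-learner weight
        pred = np.where(pol * (X[:, j] - t) >= 0, 1, -1)
        w *= np.exp(-alpha * y * pred)          # upweight misclassified samples
        w /= w.sum()
        selected.append(j)
    return selected
```

On a toy two-feature dataset where only the first feature separates the classes, the stump learner selects feature 0 in the first round; the discriminative feature dominates the weighted-error search.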
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2006)
Publisher: Association for Computing Machinery, Inc.
Copyright © 2004 by the Association for Computing Machinery, Inc. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Publications Dept., ACM Inc., fax +1 (212) 869-0481, or email@example.com. The definitive version of this paper can be found at ACM's Digital Library – http://www.acm.org/dl/.