One of the major visual complications confronting face recognition is pose variation. It is generally perceived that a part-based representation of faces would be more robust to such pose variations. Instead of adopting a set of hand-crafted parts, we take a data-driven approach to learn a probabilistically aligned part model, namely the probabilistic elastic part (PEP) model. The model is obtained by fitting a spatial-appearance Gaussian mixture model (GMM) on dense local features extracted from a set of pose-variant face images.
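The fitting step above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes scikit-learn's `GaussianMixture`, and the descriptor dimensions, feature counts, and random data are toy placeholders standing in for real dense local descriptors (e.g. SIFT-like features sampled over the image).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy stand-in for dense local features: each row is an appearance
# descriptor augmented with its normalized (x, y) image location, so each
# mixture component jointly models appearance and position, i.e. it acts
# as one probabilistically aligned "part".
rng = np.random.default_rng(0)
n_features, desc_dim = 500, 8                    # toy sizes for illustration
descriptors = rng.normal(size=(n_features, desc_dim))
locations = rng.uniform(size=(n_features, 2))    # normalized (x, y)
spatial_appearance = np.hstack([descriptors, locations])

# Fit the spatial-appearance GMM; each component corresponds to one part.
n_parts = 16
gmm = GaussianMixture(n_components=n_parts, covariance_type="diag",
                      random_state=0).fit(spatial_appearance)
print(gmm.means_.shape)  # one part centroid per component
```

In practice the GMM would be trained on features pooled from many pose-variant training faces, so that each component settles on a face region that recurs across poses.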
For a single face image or a track of face images, each mixture component of the learned spatial-appearance GMM selects the one local feature that induces the highest probability on that component. These selected local features are concatenated to form the final pose-invariant representation, namely the PEP representation. We apply the PEP representation to both unconstrained face verification and unsupervised face detector adaptation. For face verification, the PEP model achieved state-of-the-art verification accuracy on both the Labeled Faces in the Wild dataset (currently ranked No. 2) and the YouTube Faces video dataset (currently ranked No. 1). For unsupervised face detector adaptation, we observed significant detection performance improvements when adapting two state-of-the-art face detectors on three different datasets.
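The per-component selection and concatenation step can be sketched as below. This is a hedged illustration under the same toy assumptions as before (random stand-in features, scikit-learn's `GaussianMixture`, SciPy for per-component densities); the exact scoring in the paper may differ in detail.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

# Toy spatial-appearance features for one face (or one track of faces):
# appearance descriptor plus normalized (x, y) location per row.
rng = np.random.default_rng(1)
feats = np.hstack([rng.normal(size=(300, 8)), rng.uniform(size=(300, 2))])

# Stand-in for the pre-trained PEP model (in practice, fit on training data).
gmm = GaussianMixture(n_components=5, covariance_type="diag",
                      random_state=0).fit(feats)

# Score every local feature under every component: weighted log-density
# log w_k + log N(f | mu_k, Sigma_k), shape (n_features, n_components).
log_probs = np.stack(
    [multivariate_normal.logpdf(feats, mean=m, cov=np.diag(c))
     for m, c in zip(gmm.means_, gmm.covariances_)],
    axis=1,
) + np.log(gmm.weights_)

# Each component picks the single local feature with the highest probability;
# concatenating the selected features yields the PEP representation.
best = log_probs.argmax(axis=0)   # selected feature index per part
pep = feats[best].ravel()         # (n_components * feature_dim,)
print(pep.shape)
```

Because each part always contributes exactly one feature regardless of where that feature appears in the image, the concatenated vector has a fixed length and a fixed part ordering, which is what makes it comparable across faces under different poses.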