Detecting Objects using Deformation Dictionaries

Bharath Hariharan, C. Lawrence Zitnick, and Piotr Dollár

Abstract

Several popular and effective object detectors separately model intra-class variations arising from deformations and appearance changes. This reduces model complexity while enabling the detection of objects across changes in viewpoint, object pose, etc. The Deformable Part Model (DPM) is perhaps the most successful such model to date. A common assumption is that the exponential number of templates enabled by a DPM is critical to its success. In this paper, we show the counter-intuitive result that it is possible to achieve similar accuracy using a small dictionary of deformations. Each component in our model is represented by a single HOG template and a dictionary of flow fields that determine the deformations the template may undergo. While the number of candidate deformations is dramatically fewer than that for a DPM, the deformed templates tend to be plausible and interpretable. In addition, we discover that the set of deformation bases is actually transferable across object categories and that learning shared bases across similar categories can boost accuracy.
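The abstract describes each model component as a single HOG template paired with a dictionary of flow fields that deform it. As an illustration only, the sketch below shows one plausible reading of that idea: warp a feature template by a displacement field and score a window as the best response over the dictionary. The function names, the bilinear warping via `scipy.ndimage.map_coordinates`, and the max-over-deformations scoring rule are all assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def deform_template(template, flow):
    """Warp a (H, W, C) feature template by a per-cell flow field.

    template: HOG-like feature map, shape (H, W, C)
    flow:     displacement field, shape (H, W, 2), giving (dy, dx) per cell
    (Hypothetical helper; the paper's deformation model may differ.)
    """
    H, W, C = template.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    coords = np.stack([ys + flow[..., 0], xs + flow[..., 1]])
    # Bilinearly sample each feature channel at the displaced coordinates.
    return np.stack(
        [map_coordinates(template[..., c], coords, order=1, mode="nearest")
         for c in range(C)],
        axis=-1)

def score_window(template, flows, window):
    """Score a feature window as the maximum correlation over all
    dictionary deformations of the single template (assumed scoring rule)."""
    return max(float(np.sum(deform_template(template, f) * window))
               for f in flows)
```

Because the dictionary is small, evaluating every deformation per window stays cheap compared to the exponentially many part placements a DPM implicitly scores.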

Details

Publication type: Inproceedings
Published in: CVPR (Computer Vision and Pattern Recognition)
URL: http://vision.ucsd.edu/~pdollar/research.html