Picking the Best DAISY

Local image descriptors that are highly discriminative, computationally efficient, and low in storage footprint have long been a dream goal of computer vision research. In this paper, we focus on learning such descriptors, which make use of the DAISY configuration and are simple to compute both sparsely and densely. We develop a new training set of match/non-match image patches which improves on previous work. We test a wide variety of gradient- and steerable-filter-based configurations and optimize over all parameters to obtain low matching errors for the descriptors. We further explore robust normalization, dimension reduction, and dynamic range reduction to increase the discriminative power and yet reduce the storage requirement of the learned descriptors. All these enable us to obtain highly efficient local descriptors: e.g., 13.2% error at 13 bytes of storage per descriptor, compared with 26.1% error at 128 bytes for SIFT.
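To make the storage figures concrete, here is a minimal sketch of how dimension reduction followed by dynamic range reduction can shrink a 128-dimensional descriptor to 13 bytes. This is an illustration under assumed parameters (26 PCA components at 4 bits each), not the paper's exact pipeline; the random descriptors stand in for real patch data.

```python
import numpy as np

# Assumed illustration: PCA dimension reduction, then coarse quantization
# of the dynamic range. 26 components * 4 bits = 13 bytes per descriptor.
rng = np.random.default_rng(0)
descriptors = rng.random((1000, 128)).astype(np.float32)  # stand-in 128-d descriptors

# Dimension reduction: project onto the top-k principal components.
k = 26                                    # assumed reduced dimensionality
mean = descriptors.mean(axis=0)
centered = descriptors - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
reduced = centered @ vt[:k].T             # shape (1000, k)

# Dynamic range reduction: rescale to [0, 15] and keep 4 bits per component.
lo, hi = reduced.min(), reduced.max()
quantized = np.round((reduced - lo) / (hi - lo) * 15).astype(np.uint8)

bytes_per_descriptor = k * 4 / 8
print(bytes_per_descriptor)  # 13.0
```

In practice the projection matrix and quantization range would be learned on the training set of match/non-match patches and reused at test time.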

PDF: winder_hua_brown_cvpr09.pdf

In: Computer Vision and Pattern Recognition (CVPR) 2009

Publisher: IEEE Computer Society
Copyright © 2007 IEEE. Reprinted from IEEE Computer Society. This material is posted here with permission of the IEEE. Internal or personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution must be obtained from the IEEE by writing to pubs-permissions@ieee.org. By choosing to view this document, you agree to all provisions of the copyright laws protecting it.

Details

Type: Inproceedings
URL: http://www.cs.ubc.ca/~mbrown/patchdata/patchdata.html