Mustafa Bilgic and Paul N. Bennett
Methods that reduce the amount of labeled data needed for training have focused more on selecting which documents to label than on which queries to label. One exception uses expected loss optimization (ELO) to estimate which queries should be selected, but it is limited to rankers that predict absolute graded relevance. In this work, we demonstrate how to easily adapt ELO to work with any ranker and show that estimating expected loss in DCG is more robust than in NDCG, even when the final performance measure is NDCG.
Published in: Poster-Paper in Proceedings of the 35th Annual ACM SIGIR Conference (SIGIR 2012).
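The core idea behind ELO-style query selection can be illustrated with a short sketch. The snippet below is not the paper's method: it is a minimal Monte Carlo illustration, assuming a hypothetical interface where each document in a query's current ranking comes with a distribution over relevance grades. Expected DCG loss is estimated as the average gap between the DCG of the ideal reordering and the DCG of the current ranking; queries with the largest expected loss are selected for labeling.

```python
import math
import random

def dcg(rels):
    # Discounted cumulative gain of a ranked list of relevance grades.
    return sum((2**r - 1) / math.log2(i + 2) for i, r in enumerate(rels))

def expected_dcg_loss(label_probs, n_samples=1000, seed=0):
    """Monte Carlo estimate of expected DCG loss for one query.

    label_probs: per-document distributions over relevance grades,
    in the ranker's current order (a hypothetical interface, not the
    paper's actual estimator). Loss per sample = DCG of the ideal
    reordering minus DCG of the current ranking.
    """
    rng = random.Random(seed)
    grades = list(range(len(label_probs[0])))
    total = 0.0
    for _ in range(n_samples):
        rels = [rng.choices(grades, weights=p)[0] for p in label_probs]
        total += dcg(sorted(rels, reverse=True)) - dcg(rels)
    return total / n_samples

def select_queries(per_query_probs, k=1):
    # Pick the k queries whose labels are expected to matter most.
    scored = sorted(per_query_probs.items(),
                    key=lambda kv: expected_dcg_loss(kv[1]),
                    reverse=True)
    return [q for q, _ in scored[:k]]
```

A query whose current ranking is already certainly ideal has zero expected loss and is never selected, while a query ranked against its likely labels scores high; this captures the intuition that labeling effort should go where the ranking is most likely wrong.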