Learning to Rank Using Classification and Gradient Boosting

P. Li, C.J.C. Burges, and Q. Wu

Abstract

We cast the ranking problem as (1) multiple classification and (2) multiple ordinal classification, which lead to computationally tractable learning algorithms for relevance ranking in Web search. We consider the DCG criterion (discounted cumulative gain), a standard quality measure in information retrieval. Our approach is motivated by the fact that perfect classifications naturally result in perfect DCG scores and the DCG errors are bounded by classification errors. We propose using the Expected Relevance to convert the class probabilities into ranking scores. The class probabilities are learned using a gradient boosting tree algorithm. Evaluations on large-scale datasets show that our approach can improve LambdaRank [5] and the regression-based ranker [6], in terms of the (normalized) DCG scores.
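The two measures named in the abstract can be made concrete with a short sketch. The snippet below, written as a minimal illustration rather than the paper's actual implementation, uses the standard exponential-gain form of DCG and computes an Expected Relevance score as the probability-weighted mean of the relevance grades; the five-grade scale (0–4) and the example probabilities are assumptions for illustration only.

```python
import math

def dcg(relevances):
    """Discounted cumulative gain of a ranked list of relevance grades,
    using the standard gain (2^rel - 1) and discount 1/log2(rank + 1)."""
    return sum((2 ** rel - 1) / math.log2(i + 2)
               for i, rel in enumerate(relevances))

def expected_relevance(class_probs, grades=(0, 1, 2, 3, 4)):
    """Convert one document's class probabilities into a ranking score:
    the expectation of the grade under the predicted distribution."""
    return sum(p * g for p, g in zip(class_probs, grades))

# Hypothetical per-document class probabilities from a classifier
# (e.g. a gradient boosting tree model), one row per document.
probs = [
    [0.1, 0.2, 0.4, 0.2, 0.1],  # document A
    [0.6, 0.3, 0.1, 0.0, 0.0],  # document B
    [0.0, 0.1, 0.2, 0.3, 0.4],  # document C
]
true_grades = [2, 0, 4]

# Rank documents by Expected Relevance, then score the ranking with DCG.
order = sorted(range(len(probs)),
               key=lambda i: expected_relevance(probs[i]), reverse=True)
ranked_grades = [true_grades[i] for i in order]
print(ranked_grades)                 # grades in predicted rank order
print(round(dcg(ranked_grades), 3))  # DCG of the induced ranking
```

A perfect classifier would put all probability mass on each document's true grade, so the Expected Relevance ordering would match the ideal ordering and the DCG would be maximal, which is the intuition behind the classification-error bound the abstract mentions.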

Details

Publication type: Inproceedings
Published in: Advances in Neural Information Processing Systems 20
Number: MSR-TR-2007-74
Institution: Microsoft Research
Publisher: MIT Press, Cambridge, MA