RankBoost on LETOR 4.0 

Introduction to RankBoost

The basic idea of RankBoost is to formalize learning to rank as a problem of binary classification on instance pairs, and then to adopt a boosting approach. Like all boosting algorithms, RankBoost trains one weak ranker in each round and combines these weak rankers into the final ranking function. After each round, the document pairs are re-weighted: the weights of correctly ranked pairs are decreased and the weights of wrongly ranked pairs are increased.
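For concreteness, the boosting loop described above can be sketched in Python roughly as follows. This is a minimal illustration, not the code used to produce the LETOR baselines; the pair representation and the weak_learner helper are assumptions made for the example.

import numpy as np

def rankboost(features, pairs, weak_learner, n_rounds):
    """Minimal RankBoost sketch (illustrative only, not the LETOR code).

    features -- (n_docs, n_features) matrix of feature values
    pairs    -- list of (i, j) index pairs meaning document i should be
                ranked above document j
    weak_learner(features, pairs, D) -- returns a weak ranker h (a
                function of the feature matrix) and its weight alpha
                for the current pair distribution D
    """
    above = np.array([i for i, _ in pairs])
    below = np.array([j for _, j in pairs])
    D = np.full(len(pairs), 1.0 / len(pairs))   # uniform pair weights
    ensemble = []
    for _ in range(n_rounds):
        h, alpha = weak_learner(features, pairs, D)
        scores = h(features)                     # h(x) in {0, 1}
        # Re-weight pairs: correctly ranked pairs get smaller weight,
        # wrongly ranked pairs get larger weight.
        D *= np.exp(alpha * (scores[below] - scores[above]))
        D /= D.sum()                             # renormalize
        ensemble.append((alpha, h))
    # Final ranking function: H(x) = sum_t alpha_t * h_t(x)
    return lambda X: sum(a * h(X) for a, h in ensemble)

Here weak_learner stands for the per-round weak-ranker selection described under Learning Parameters below.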

The details of RankBoost can be found in the JMLR paper listed under Papers & Docs below.

  

Learning Parameters


We define each weak ranker on the basis of a single feature. With a proper threshold, the weak ranker has binary output, i.e., it takes values from {0, 1}. In each round, we select the best weak ranker from (# of features) x (255 thresholds) candidates.
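As an illustration of that selection step, the sketch below scans all (feature, threshold) candidates and keeps the one with the largest |r|, the weighted pairwise agreement used by RankBoost for {0, 1}-valued weak rankers. The function name and the evenly spaced threshold placement are assumptions for the example, not details of the LETOR implementation.

import numpy as np

def best_threshold_ranker(features, pairs, D, n_thresholds=255):
    """Pick the best binary threshold ranker (illustrative sketch).

    Each candidate is h(x) = 1 if x[f] > theta else 0. Following the
    RankBoost paper, the candidate maximizing |r| is kept, where
    r = sum over pairs of D * (h(x_i) - h(x_j)) and pair (i, j) means
    document i should be ranked above document j.
    """
    above = np.array([i for i, _ in pairs])
    below = np.array([j for _, j in pairs])
    best_r, best_f, best_theta = 0.0, 0, float(features[:, 0].min())
    for f in range(features.shape[1]):
        col = features[:, f]
        # Candidate thresholds for this feature (evenly spaced here;
        # the actual LETOR threshold placement may differ).
        for theta in np.linspace(col.min(), col.max(), n_thresholds):
            h = (col > theta).astype(float)      # binary output {0, 1}
            r = float(np.sum(D * (h[above] - h[below])))
            if abs(r) > abs(best_r):
                best_r, best_f, best_theta = r, f, theta
    alpha = 0.5 * np.log((1.0 + best_r) / (1.0 - best_r + 1e-12))
    ranker = lambda X, f=best_f, t=best_theta: (X[:, f] > t).astype(float)
    return ranker, alpha

This matches the weak_learner signature assumed in the sketch above.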

The number of weak rankers (i.e., the number of boosting rounds) is determined by cross validation, tuned separately for each fold; see the sketch after the table below.

Dataset    # of weak rankers (Fold1 to Fold5)
MQ2007     163, 567, 491, 434, 195
MQ2008     220, 11, 56, 58, 34
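The per-fold selection can be sketched as follows: the ensemble size with the best validation-set performance is kept. The helper names and the choice of metric are assumptions for illustration, not part of the LETOR distribution.

def select_n_rounds(partial_ensembles, vali_metric):
    """Choose the number of weak rankers on the validation set
    (illustrative sketch; helper names are assumptions).

    partial_ensembles -- partial_ensembles[t] is the model after
                         t + 1 boosting rounds
    vali_metric(model) -- validation-set score, higher is better
                          (e.g. MAP or NDCG)
    """
    scores = [vali_metric(model) for model in partial_ensembles]
    return scores.index(max(scores)) + 1   # number of rounds to keep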

 

Papers & Docs



 Y. Freund, R. Iyer, R. E. Schapire, and Y. Singer. An efficient boosting algorithm for combining preferences. J. Mach. Learn. Res., 4:933-969, 2003.

BibTeX
@article{964285,
    author = {Yoav Freund and Raj Iyer and Robert E. Schapire and Yoram Singer},
    title = {An efficient boosting algorithm for combining preferences},
    journal = {J. Mach. Learn. Res.},
    volume = {4},
    year = {2003},
    issn = {1533-7928},
    pages = {933--969},
    publisher = {MIT Press},
    address = {Cambridge, MA, USA},
}

Notes

This document was modified by Jun Xu, and the experiments were conducted by Jun Xu. If you encounter any problems, please contact letor@microsoft.com.