Microsoft Learning to Rank Datasets

We release two large-scale datasets for research on learning to rank: MSLR-WEB30K, with more than 30,000 queries, and MSLR-WEB10K, a random sample of 10,000 queries drawn from it.

Dataset Descriptions

The datasets are machine-learning data in which queries and URLs are represented by IDs. Each dataset consists of feature vectors extracted from query-URL pairs, along with relevance judgment labels:

(1) The relevance judgments are obtained from a retired labeling set of a commercial web search engine (Microsoft Bing). They take five values, from 0 (irrelevant) to 4 (perfectly relevant).

(2) The features were extracted by us and are those widely used in the research community.

In the data files, each row corresponds to a query-URL pair. The first column is the relevance label of the pair, the second column is the query ID, and the following columns are features. The larger the relevance label, the more relevant the query-URL pair. Each query-URL pair is represented by a 136-dimensional feature vector. The details of the features can be found here.

Below are two rows from the MSLR-WEB10K dataset:
=============================================================
0 qid:1 1:3 2:0 3:2 4:2 ... 135:0 136:0
2 qid:1 1:3 2:3 3:0 4:0 ... 135:0 136:0
=============================================================
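This row layout follows the SVMlight-style convention of `<label> qid:<id> <index>:<value> ...`. As a minimal sketch of reading it, the following Python function (our own helper, not part of any official tooling for the datasets) parses one row into a label, a query ID, and a dense 136-dimensional feature list:

```python
def parse_row(line, num_features=136):
    """Parse '<label> qid:<id> 1:<v> 2:<v> ...' into (label, qid, features).

    Assumes the SVMlight-style layout shown in the example rows above.
    Feature indices in the file are 1-based; the returned list is 0-based.
    """
    tokens = line.split()
    label = int(tokens[0])                    # first column: relevance label
    qid = int(tokens[1].split(":")[1])        # second column: query ID
    features = [0.0] * num_features
    for tok in tokens[2:]:                    # remaining columns: index:value
        idx, val = tok.split(":")
        features[int(idx) - 1] = float(val)
    return label, qid, features

label, qid, feats = parse_row("2 qid:1 1:3 2:3 3:0 4:0")
# label == 2, qid == 1, feats[0] == 3.0
```

For real workloads, an existing loader such as scikit-learn's `load_svmlight_file` handles this format as well.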

Dataset Partition

We have partitioned each dataset into five parts, denoted S1, S2, S3, S4, and S5, each containing about the same number of queries, for five-fold cross validation. In each fold, we propose using three parts for training, one part for validation, and the remaining part for testing (see the table below). The training set is used to learn ranking models. The validation set is used to tune the hyperparameters of the learning algorithms, such as the number of iterations in RankBoost and the combination coefficient in the objective function of Ranking SVM. The test set is used to evaluate the performance of the learned ranking models.

Folds    Training set    Validation set    Test set
Fold1    {S1,S2,S3}      S4                S5
Fold2    {S2,S3,S4}      S5                S1
Fold3    {S3,S4,S5}      S1                S2
Fold4    {S4,S5,S1}      S2                S3
Fold5    {S5,S1,S2}      S3                S4
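The fold assignments above follow a simple rotation of the five parts. As a sketch, this Python snippet (our own illustration; the function name `fold` is hypothetical) reproduces the train/validation/test split for each fold:

```python
# The five parts of the dataset, as defined in the partition above.
parts = ["S1", "S2", "S3", "S4", "S5"]

def fold(k):
    """Return (training, validation, test) part names for Fold k (k = 1..5).

    Each fold rotates the part list: the first three rotated parts train,
    the fourth validates, and the fifth tests.
    """
    rotated = parts[k - 1:] + parts[:k - 1]
    return rotated[:3], rotated[3], rotated[4]

for k in range(1, 6):
    train, valid, test = fold(k)
    print(f"Fold{k}: train={train}, validation={valid}, test={test}")
```

Averaging the test results over the five folds gives the cross-validated performance of a ranking model.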


Release Notes

  • The following people contributed to the construction of the data: Tao Qin, Tie-Yan Liu, Wenkui Ding, Jun Xu, and Hang Li.
  • We would like to thank the Bing team for their support in dataset creation, and Nick Craswell for his help with the dataset release.
  • If you have any questions or suggestions, please let us know.