Learning optimally diverse rankings over large document collections

Alex Slivkins, Filip Radlinski, and Sreenivas Gollapudi

Abstract

Most learning-to-rank research assumes that the utility of presenting different documents to users is independent, so the learned ranking functions often return redundant results. The few approaches that avoid this redundancy either lack theoretical foundations or do not scale. We present a learning-to-rank formulation that optimizes the fraction of satisfied users, together with a scalable algorithm that explicitly takes document similarity and ranking context into account. We give theoretical justifications for this approach as well as a near-optimal algorithm. Our evaluation adds optimizations that improve empirical performance, and shows that our algorithms learn orders of magnitude faster than previous approaches.
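To illustrate the objective described above — maximizing the fraction of satisfied users rather than summing independent per-document utilities — here is a toy sketch, not the paper's algorithm. It assumes a hypothetical setup where each user has a single intent drawn from a known distribution, a document either satisfies an intent or not, and a ranking "covers" a user if at least one of its documents satisfies that user's intent. Under this objective, a redundant document adds zero marginal value, so a greedy builder naturally produces diverse rankings:

```python
def satisfied_fraction(ranking, relevance, intent_probs):
    """Probability-weighted fraction of users satisfied by at least
    one document in the ranking. `relevance[doc]` maps intents to
    0/1 relevance; `intent_probs` maps intents to probabilities."""
    total = 0.0
    for intent, p in intent_probs.items():
        if any(relevance[doc].get(intent, 0) for doc in ranking):
            total += p
    return total


def greedy_diverse_ranking(docs, relevance, intent_probs, k):
    """Greedily add the document with the largest marginal gain in
    the fraction of satisfied users. Documents redundant with the
    ranking so far contribute nothing, so diversity emerges."""
    ranking = []
    for _ in range(k):
        best = max(
            (d for d in docs if d not in ranking),
            key=lambda d: satisfied_fraction(
                ranking + [d], relevance, intent_probs
            ),
        )
        ranking.append(best)
    return ranking


# Hypothetical example: two intents, two redundant documents for "A".
relevance = {"d1": {"A": 1}, "d2": {"A": 1}, "d3": {"B": 1}}
intent_probs = {"A": 0.6, "B": 0.4}

diverse = greedy_diverse_ranking(["d1", "d2", "d3"], relevance, intent_probs, 2)
# diverse = ["d1", "d3"]: satisfies all users (fraction 1.0), whereas
# the redundant ranking ["d1", "d2"] satisfies only the 0.6 with intent A.
```

The actual paper learns this behavior online from click feedback over large document collections, but the example shows why the "fraction of satisfied users" objective penalizes redundancy where independent-utility objectives do not.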

Details

Publication type: Inproceedings
Published in: Proc. of the 27th International Conference on Machine Learning (ICML 2010)
URL: http://arxiv.org/abs/1005.5197