Robust Ranking Models via Risk-Sensitive Optimization

Lidan Wang, Paul N. Bennett, and Kevyn Collins-Thompson

Abstract

Many techniques for improving search result quality have been proposed. Typically, these techniques increase average effectiveness by devising advanced ranking features and/or by developing sophisticated learning to rank algorithms. However, while these approaches typically improve average performance of search results relative to simple baselines, they often ignore the important issue of robustness. That is, although achieving an average gain overall, the new models often hurt performance on many queries. This limits their application in real-world retrieval scenarios. Given that poor robustness can negatively impact user satisfaction, we present a unified framework for jointly optimizing effectiveness and robustness. We propose an objective that captures the tradeoff between these two competing measures and demonstrate how we can jointly optimize for them in a principled learning framework. Experiments indicate that ranking models learned this way significantly decrease the worst ranking failures while maintaining strong average effectiveness on par with current state-of-the-art models.
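The tradeoff the abstract describes can be illustrated with a minimal sketch (this is an assumption-laden illustration, not the paper's exact formulation): per-query effectiveness deltas against a baseline ranker are split into gains (reward) and losses (risk), and a parameter `alpha` weights how heavily failures are penalized.

```python
# Illustrative sketch only -- not the authors' exact objective.
# Assumes per-query effectiveness scores (e.g., NDCG) are available for
# both a candidate model and a baseline ranker.

def risk_sensitive_objective(model_scores, baseline_scores, alpha=1.0):
    """Average gain minus alpha times average downside loss vs. a baseline.

    model_scores, baseline_scores: per-query effectiveness values.
    alpha: tradeoff weight; larger values penalize queries the model
    hurts (relative to the baseline) more heavily.
    """
    n = len(model_scores)
    deltas = [m - b for m, b in zip(model_scores, baseline_scores)]
    reward = sum(d for d in deltas if d > 0) / n   # avg gain on helped queries
    risk = -sum(d for d in deltas if d < 0) / n    # avg loss on hurt queries
    return reward - alpha * risk
```

With `alpha = 1` this reduces to the plain mean improvement over the baseline; increasing `alpha` trades average effectiveness for robustness by penalizing ranking failures more strongly.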

Details

Publication type: Inproceedings
Published in: Proceedings of the 35th Annual ACM SIGIR Conference (SIGIR 2012)
URL: http://research.microsoft.com/en-us/um/people/pauben/papers/wang-et-al-sigir-2012.pdf
Publisher: ACM