Large Scale Validation and Analysis of Interleaved Search Evaluation

Olivier Chapelle, Thorsten Joachims, Filip Radlinski, and Yisong Yue

Abstract

Interleaving is an increasingly popular technique for evaluating information retrieval systems based on implicit user feedback. While a number of isolated studies have analyzed how this technique agrees with conventional offline evaluation approaches and other online techniques, a complete picture of its efficiency and effectiveness is still lacking. In this paper we extend and combine the body of empirical evidence regarding interleaving, and provide a comprehensive analysis of interleaving using data from two major commercial search engines and a retrieval system for scientific literature. In particular, we analyze the agreement of interleaving with manual relevance judgments and observational implicit feedback measures, estimate the statistical efficiency of interleaving, and explore the relative performance of different interleaving variants. We also show how to learn improved credit-assignment functions for clicks that further increase the sensitivity of interleaving.
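To make the evaluation setup concrete, the following is a minimal sketch of team-draft interleaving, one of the interleaving variants studied in this line of work, together with a simple click-based credit rule. The function names and the unit-credit scheme are illustrative assumptions, not the authors' exact implementation.

import random

def team_draft_interleave(ranking_a, ranking_b, length=10):
    # Team-draft interleaving: in each round, rankers A and B each "draft"
    # their highest-ranked document not yet shown; a coin flip decides who
    # picks first in the round. Returns the interleaved list and the team
    # (contributing ranker) of each document.
    interleaved, teams = [], {}
    idx_a, idx_b = 0, 0
    while len(interleaved) < length and (idx_a < len(ranking_a) or idx_b < len(ranking_b)):
        first_is_a = random.random() < 0.5  # coin flip for this round
        for team in (('A', 'B') if first_is_a else ('B', 'A')):
            if team == 'A':
                # Skip documents already drafted by the other team.
                while idx_a < len(ranking_a) and ranking_a[idx_a] in teams:
                    idx_a += 1
                if idx_a < len(ranking_a):
                    interleaved.append(ranking_a[idx_a])
                    teams[ranking_a[idx_a]] = 'A'
            else:
                while idx_b < len(ranking_b) and ranking_b[idx_b] in teams:
                    idx_b += 1
                if idx_b < len(ranking_b):
                    interleaved.append(ranking_b[idx_b])
                    teams[ranking_b[idx_b]] = 'B'
            if len(interleaved) >= length:
                break
    return interleaved, teams

def credit(clicked_docs, teams):
    # Simple credit assignment: each click counts one unit for the ranker
    # that contributed the clicked document; the ranker with more credit
    # wins the impression. (The paper studies learning improved credit
    # functions beyond this uniform rule.)
    a = sum(1 for d in clicked_docs if teams.get(d) == 'A')
    b = sum(1 for d in clicked_docs if teams.get(d) == 'B')
    return 'A' if a > b else 'B' if b > a else 'tie'

Aggregating these per-impression outcomes over many queries yields the pairwise preference between the two rankers that interleaving is designed to measure.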

Details

Publication type: Article
Published in: Transactions on Information Systems (TOIS)
URL: http://dl.acm.org/citation.cfm?id=2094078
Volume: 30
Number: 1
Publisher: ACM