Large Scale Validation and Analysis of Interleaved Search Evaluation

Interleaving is an increasingly popular technique for evaluating information retrieval systems based on implicit user feedback. While a number of isolated studies have analyzed how this technique agrees with conventional offline evaluation approaches and other online techniques, a complete picture of its efficiency and effectiveness is still lacking. In this paper we extend and combine the body of empirical evidence regarding interleaving, and provide a comprehensive analysis of interleaving using data from two major commercial search engines and a retrieval system for scientific literature. In particular, we analyze the agreement of interleaving with manual relevance judgments and observational implicit feedback measures, estimate the statistical efficiency of interleaving, and explore the relative performance of different interleaving variants. We also show how to learn improved credit-assignment functions for clicks that further increase the sensitivity of interleaving.
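The specific variants and learned credit-assignment functions are described in the full paper. As a rough illustration of the general idea only, the sketch below implements team-draft interleaving, one widely used variant, together with a naive one-click-one-credit rule. The function names, the fixed result length, and the example document IDs are illustrative assumptions, not taken from the paper.

```python
import random

def team_draft_interleave(ranking_a, ranking_b, length=10):
    """Illustrative team-draft interleaving: the ranker whose team is
    currently smaller (ties broken by a coin flip) contributes its
    highest-ranked document not yet shown, and that document is
    remembered as belonging to the contributing ranker's team."""
    interleaved, team_a, team_b, shown = [], [], [], set()
    while len(interleaved) < length:
        pick_a = len(team_a) < len(team_b) or (
            len(team_a) == len(team_b) and random.random() < 0.5)
        ranking, team = (ranking_a, team_a) if pick_a else (ranking_b, team_b)
        doc = next((d for d in ranking if d not in shown), None)
        if doc is None:
            break  # the selected ranker has no unshown documents left
        interleaved.append(doc)
        shown.add(doc)
        team.append(doc)
    return interleaved, set(team_a), set(team_b)

def evaluate_clicks(clicks, team_a, team_b):
    """Naive credit assignment: each clicked document credits the team
    that contributed it; the ranker with more credit wins the query."""
    score_a = sum(1 for d in clicks if d in team_a)
    score_b = sum(1 for d in clicks if d in team_b)
    return "A" if score_a > score_b else "B" if score_b > score_a else "tie"

if __name__ == "__main__":
    a = ["d1", "d2", "d3", "d4"]
    b = ["d3", "d5", "d1", "d6"]
    ranked, team_a, team_b = team_draft_interleave(a, b, length=4)
    print(ranked, evaluate_clicks(["d3"], team_a, team_b))
```

Aggregating such per-query outcomes over many impressions yields the paired comparison between the two rankers; the paper studies how quickly and reliably this comparison converges, and how the simple credit rule above can be replaced by learned credit-assignment functions.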

In  ACM Transactions on Information Systems (TOIS)

Publisher  ACM

Details

Type  Article
URL  http://dl.acm.org/citation.cfm?id=2094078
Volume  30
Number  1