Yuanhua Lv, Ariel Fuxman, and Ashok Chandra
Traditional IR applications assume that there is always enough space ("real estate") available to display as many results as the system returns. Consequently, traditional evaluation metrics were typically designed to take a length cutoff k of the result list as a parameter. For example, one computes DCG@k, Prec@k, etc., based on the top-k results in the ranking list. However, there are important modern ranking applications where the result real estate is constrained to a small fixed space, such as search verticals aggregated into Web search results and recommendation systems. For such applications, the following tradeoff arises: given a fixed amount of real estate, should we show a small number of results with rich captions and details, or a larger number of results with less informative captions?
In other words, there is a tradeoff between the length of the result list (i.e., quantity) and the informativeness of the results (i.e., quality). This tradeoff has important implications for evaluation metrics, since it makes the length cutoff k hard to determine a priori. In order to tackle this problem, we propose two desirable formal constraints to capture the heuristics of regulating the quantity-quality tradeoff, inspired by the axiomatic approach to IR. We then present a general method to normalize the well-known Discounted Cumulative Gain (DCG) metric for balancing the quantity-quality tradeoff, yielding a new metric that we call Length-adjusted Discounted Cumulative Gain (LDCG). LDCG is shown to automatically balance the length and the informativeness of a ranking list without requiring an explicit parameter k, while still preserving the good properties of DCG.
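For reference, the standard DCG@k metric discussed above can be sketched as follows. This is the common logarithmic-discount formulation; the gain values in the usage example are illustrative only, and this sketch does not reproduce the paper's LDCG normalization.

```python
import math

def dcg_at_k(gains, k):
    """Standard DCG@k: sum of gain_i / log2(i + 1) over the top-k
    results, with positions i indexed from 1."""
    return sum(g / math.log2(i + 1) for i, g in enumerate(gains[:k], start=1))

# Illustrative gains only: a short list of rich results vs. a longer
# list of less informative ones. Which scores higher depends on k,
# which is exactly why a fixed cutoff is hard to choose a priori.
short_rich = [3, 3]
long_plain = [2, 2, 2, 2]
print(dcg_at_k(short_rich, 2), dcg_at_k(long_plain, 4))
```

Note how the comparison flips with k: at k=2 the rich list wins, while at k=4 the longer list accumulates more discounted gain, motivating a metric that adjusts for length automatically.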
In Proceedings of the 36th European Conference on Information Retrieval (ECIR)