Learning to Re-rank: Query-dependent Image Re-ranking using Click Data

Our objective is to improve the performance of keyword-based image search engines by re-ranking their baseline results. We address three limitations of existing search engines in this paper. First, there is no straightforward, fully automated way of going from textual queries to visual features. Image search engines are therefore forced to rely on static and textual features alone for ranking; visual features are used only for secondary tasks such as finding similar images. Second, image rankers are trained on query-image pairs labeled with relevance judgements determined by human experts. Such labels are well known to be noisy due to various factors, including ambiguous queries, unknown user intent and subjectivity in human judgements, which leads to learning a sub-optimal ranker. Finally, a static ranker is typically built to handle disparate user queries. The ranker is therefore unable to adapt its parameters to suit the query at hand, which again leads to sub-optimal results. All of these problems can be mitigated by incorporating a second re-ranking stage that leverages user click data.

We hypothesise that images clicked in response to a query are mostly relevant to the query. We therefore aim to re-rank the original search results so as to promote images that are likely to be clicked to the top of the ranked list. This is achieved by using Gaussian Process regression to predict the normalised click count for each image. Re-ranking is then carried out based on the predicted click counts and the original ranking scores. It is demonstrated that the proposed algorithm can significantly boost the performance of a baseline search engine such as Bing image search.
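The two-stage scheme described above can be sketched in a few lines. The sketch below is illustrative only: the image feature vectors, the RBF kernel choice, the noise level, and the mixing weight `lam` used to blend predicted clicks with baseline scores are all assumptions for the example, not details taken from the talk.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    # Squared-exponential kernel between the row vectors of A and B
    # (a common default covariance for GP regression; assumed here).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_predict(X_train, y_train, X_test, noise=1e-2):
    # Zero-mean GP regression posterior mean: k(X*, X) (K + noise*I)^-1 y.
    # y_train holds the normalised click counts of previously seen images.
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y_train)
    return rbf_kernel(X_test, X_train) @ alpha

def rerank(baseline_scores, predicted_clicks, lam=0.5):
    # Blend the baseline ranking score with the predicted click count;
    # lam is a hypothetical mixing weight, not a value from the talk.
    combined = lam * baseline_scores + (1.0 - lam) * predicted_clicks
    return np.argsort(-combined)  # image indices, best first
```

For example, with toy one-dimensional features `X = [[0], [1], [2]]`, observed clicks `y = [0.9, 0.5, 0.1]`, and baseline scores `[0.2, 0.9, 0.5]`, `rerank` promotes images whose predicted click counts compensate for a weak baseline score.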

Speaker: Manik Varma