SIGIR 2008 Workshop, July 24, 2008, Singapore

Beyond Binary Relevance: Preferences, Diversity, and Set-Level Judgments

Workshop Overview

The goal of this workshop is to explore how the type of response elicited from assessors and users influences the evaluation and analysis of retrieval and filtering applications. For example, research suggests that asking people which of two results they prefer is faster and more reliable than asking them to make absolute judgments about the relevance of each result. Similarly, many researchers are using implicit measures, such as clicks, to evaluate systems. New forms of relevance information, such as preference judgments or usage data, require learning methods, evaluation measures, and collection procedures designed specifically for them. This workshop will address research challenges at the intersection of novel measures of relevance, novel learning methods, and core evaluation issues. Therefore, we encourage participants from a variety of backgrounds, including theory, experimental analysis, and both research and deployed applications. We encourage the submission of new research that extends traditional relevance assessment in evaluation, machine learning, collaborative filtering, and user modeling.

Organizers:

Paul N. Bennett, Microsoft Research (Chair & Contact person)

Ben Carterette, University of Massachusetts Amherst

Olivier Chapelle, Yahoo! Research

Thorsten Joachims, Cornell University

Important Dates

30 May: Paper submission deadline

23 June: Notification of acceptance

30 June: Camera-ready papers due

24 July: Workshop

For details on paper submission, please see the Call for Participation.
