Evaluating Exploratory Search Systems
SIGIR 2006 Workshop, 10 August 2006, Seattle, USA

Objectives

While search systems are expanding beyond simple lookup to support complex information-seeking behaviors, there is as yet no framework for evaluating this new genre of search system. This workshop aims to bring together researchers from communities such as information retrieval, library and information science, and human-computer interaction to discuss issues in the formative and summative evaluation of exploratory search systems (ESS). The focus in recent years has been on developing new systems and interfaces, not on how to evaluate them. Given the range of technology now available, we must turn our attention toward understanding the behaviors and preferences of searchers. For this reason we focus this workshop on evaluation, arguably the single most pressing issue in the development of ESS.

The general aims of the workshop are to:

  • Define metrics to evaluate ESS performance
  • Establish what ESS should do well
  • Focus on the searcher and their tasks, goals, and behaviors
  • Influence ESS designers to think more about evaluation
  • Facilitate comparability between sites and experiments
  • Discuss components for the non-interactive evaluation of ESS (e.g., searcher simulations; a minimal sketch follows this list)
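
To make the last aim concrete, here is a minimal Python sketch of a searcher simulation, assuming a toy ranked list, toy relevance judgments, and made-up click and persistence probabilities. None of these names or parameter values come from the workshop, and the found-per-examined ratio is only one candidate measure, not a proposed ESS metric.

    import random

    # Toy relevance judgments (doc id -> relevant?) and a toy ranking;
    # both are purely illustrative.
    JUDGMENTS = {"d1": True, "d2": False, "d3": True, "d4": False, "d5": True}
    RANKING = ["d2", "d1", "d4", "d3", "d5"]

    def simulate_searcher(ranking, judgments, p_click_rel=0.8,
                          p_click_nonrel=0.2, patience=0.7, seed=None):
        """Scan the ranked list top-down, clicking probabilistically,
        and abandon the list with probability (1 - patience) per result."""
        rng = random.Random(seed)
        found = examined = 0
        for doc in ranking:
            examined += 1
            relevant = judgments.get(doc, False)
            p_click = p_click_rel if relevant else p_click_nonrel
            if relevant and rng.random() < p_click:
                found += 1
            if rng.random() > patience:  # searcher gives up on the list
                break
        return found, examined

    # Average relevant documents found per result examined, over many trials.
    trials = [simulate_searcher(RANKING, JUDGMENTS, seed=i) for i in range(1000)]
    rate = sum(f for f, _ in trials) / sum(e for _, e in trials)
    print(f"relevant found per result examined: {rate:.2f}")

Varying the click and persistence parameters is one way such a simulation could stand in for searchers in repeatable, non-interactive experiments.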

Topics

We encourage contributions on, but not limited to, the following topics:

  • learning
  • system log analysis (a toy example follows this list)
  • task-oriented evaluation
  • ethnography and field studies
  • user performance and behaviors
  • searcher simulations
  • biometric data as evidence
  • role of context
  • metrics for ESS evaluation
  • ESS evaluation frameworks
  • mental models for exploratory search processes
  • test collections
  • novel exploratory search interfaces and interaction paradigms
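
As one illustration of the log-analysis and metrics topics above, the following Python sketch derives two crude session-level measures from a toy interaction log. The (session, action, target) schema and both measures are assumptions made for illustration, not a standard log format or agreed ESS metrics.

    from collections import defaultdict

    # Toy interaction log as (session id, action, target) triples.
    LOG = [
        ("s1", "query", "jaguar speed"),
        ("s1", "click", "d3"),
        ("s1", "query", "jaguar animal top speed"),
        ("s1", "click", "d7"),
        ("s1", "click", "d9"),
        ("s2", "query", "sigir 2006 workshops"),
        ("s2", "click", "d1"),
    ]

    def session_stats(log):
        """Per session: query count (a crude proxy for exploration
        breadth) and clicks per query (a crude engagement proxy)."""
        queries = defaultdict(int)
        clicks = defaultdict(int)
        for session, action, _target in log:
            if action == "query":
                queries[session] += 1
            elif action == "click":
                clicks[session] += 1
        return {s: {"queries": q, "clicks_per_query": clicks[s] / q}
                for s, q in queries.items()}

    print(session_stats(LOG))

Measures like these are starting points for discussion; whether they capture anything meaningful about exploration is exactly the kind of question the workshop is intended to address.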
