SIGIR 2007 Workshop: Learning to Rank for Information Retrieval

Overview

The task of "learning to rank" has emerged as an active and growing area of research both in information retrieval and machine learning. The goal is to design and apply methods to automatically learn a function from training data, such that the function can sort objects (e.g., documents) according to their degrees of relevance, preference, or importance as defined in a specific application.

The relevance of this task to IR is beyond question, because many IR problems are by nature ranking problems. Improved algorithms for learning ranking functions promise better retrieval quality and a reduced need for manual parameter tuning. In this way, many IR technologies can potentially be enhanced by learning to rank techniques.
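To make the task concrete, here is a minimal sketch of the pairwise approach to learning to rank, in which ranking is reduced to correctly ordering pairs of documents. This is an illustration only; the perceptron-style update and the toy feature values are invented for this example and do not correspond to any method presented at the workshop.

```python
# Minimal pairwise learning-to-rank sketch (illustrative only; the
# perceptron-style update and toy data are invented for this example).

def train_pairwise(examples, n_features, epochs=50, lr=0.1):
    """Learn weights w so that score(x) = w . x orders documents by relevance.

    `examples` is a list of (feature_vector, relevance) pairs for one query.
    Training looks at document pairs: if doc a is more relevant than doc b,
    we want w . a > w . b, and nudge w toward (a - b) whenever that fails.
    """
    w = [0.0] * n_features
    for _ in range(epochs):
        for xa, ya in examples:
            for xb, yb in examples:
                if ya > yb:  # a should outrank b
                    margin = sum(wi * (ai - bi) for wi, ai, bi in zip(w, xa, xb))
                    if margin <= 0:  # mis-ordered pair: update the weights
                        w = [wi + lr * (ai - bi) for wi, ai, bi in zip(w, xa, xb)]
    return w

def rank(w, docs):
    """Sort feature vectors by learned score, best first."""
    return sorted(docs, key=lambda x: -sum(wi * xi for wi, xi in zip(w, x)))

# Toy data: (feature vector, graded relevance judgment) for one query
training = [([3.0, 1.0], 2), ([1.0, 0.5], 1), ([0.2, 2.0], 0)]
w = train_pairwise(training, n_features=2)
ordered = rank(w, [x for x, _ in training])
print(ordered[0])  # features of the document ranked first
```

The learned function generalizes to unseen documents of the same feature space: any new feature vector can be scored with the same weights and merged into the ranking.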

The main purpose of this workshop, held in conjunction with SIGIR 2007, is to bring together IR researchers and ML researchers working on or interested in these technologies, and to let them share their latest research results, express their opinions on related issues, and discuss future directions.


Topics of Interest

We solicit submissions on any aspect of learning to rank for information retrieval. Particular areas of interest include, but are not limited to:

  • Models, features, and algorithms for learning to rank
  • Evaluation methods for learning to rank
  • Data creation methods for learning to rank
  • Applications of learning to rank methods to information retrieval
  • Comparison between traditional approaches and learning approaches to ranking
  • Theoretical analyses of learning to rank
  • Empirical comparisons between learning to rank methods

Shared Benchmark Data

Several shared data sets have been released by Microsoft Research Asia (http://research.microsoft.com/users/tyliu/LETOR/). The data sets, created from OHSUMED and TREC data, contain features and relevance judgments for training and evaluating learning to rank methods. Authors are encouraged to use these data sets in the experiments reported in their submissions.
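Feature files of this kind are commonly distributed in an SVMlight-style line format, `<relevance> qid:<query> <fid>:<value> ... # comment`. Assuming that layout (check the files in the release for the exact convention), a small parser sketch:

```python
# Parser sketch for SVMlight-style feature lines, the format commonly used
# for learning-to-rank data sets. Assumes the layout
# "<relevance> qid:<query> <fid>:<value> ... # optional comment".

def parse_letor_line(line):
    """Return (relevance, query_id, {feature_id: value}) for one line."""
    data = line.split('#', 1)[0].split()  # drop any trailing comment
    rel = int(data[0])                    # graded relevance judgment
    qid = data[1].split(':', 1)[1]        # query identifier
    feats = {}
    for tok in data[2:]:                  # remaining tokens are fid:value
        fid, val = tok.split(':', 1)
        feats[int(fid)] = float(val)
    return rel, qid, feats

rel, qid, feats = parse_letor_line("2 qid:10 1:0.71 2:0.0 3:0.36 #docid=GX001")
print(rel, qid, feats[1])  # 2 10 0.71
```

Grouping parsed lines by `qid` then yields, per query, the document lists needed for training and evaluation.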


We are organizing a second workshop on learning to rank at SIGIR 2008.

Paper Submission

Papers should be submitted electronically via the submission site (https://cmt.research.microsoft.com/LR4IR2007/). Submissions must follow the ACM conference style (see the ACM template page) and may not exceed 8 pages. Each submission will be reviewed by at least three members of the program committee. Reviewing is double-blind; please anonymize your submission. All accepted papers will be published in the workshop proceedings, which will be printed and made available at the workshop.

Paper submission closed on June 8; the review process has started.

Accepted Papers

LETOR: Benchmark Dataset for Research on Learning to Rank for Information Retrieval.
An Axiomatic Study of Learned Term-Weighting Schemes.
SoftRank: Optimising Non-Smooth Rank Metrics.
Learning to Rank with Pairwise Regularized Least-Squares.
Learning to Rank Documents for Ad-Hoc Retrieval with Regularized Models.
Learning to Rank for Information Retrieval Using Genetic Programming.
Addressing Malicious Noise in Clickthrough Data.
Efficient Query Delegation by Detecting Redundant Retrieval Strategies.

Program (July 27)

09:00~09:40: Keynote Speech I
    Learning about Ranking and Retrieval Models
    W. Bruce Croft
09:45~10:30: Paper Session I: Experience Sharing
    a) LETOR: Benchmark Dataset for Research on Learning to Rank for Information Retrieval
       Tie-Yan Liu, Jun Xu, Tao Qin, Wenying Xiong, Hang Li
    b) An Axiomatic Study of Learned Term-Weighting Schemes
       Ronan Cummins, Colm O'Riordan
10:30~11:00: Coffee break
11:00~12:30: Paper Session II: Algorithms
    a) SoftRank: Optimising Non-Smooth Rank Metrics
       Michael Taylor, John Guiver, Stephen Robertson, Tom Minka
    b) Learning to Rank with Pairwise Regularized Least-Squares
       Tapio Pahikkala, Evgeni Tsivtsivadze, Antti Airola, Jorma Boberg, Tapio Salakoski
    c) Learning to Rank Documents for Ad-Hoc Retrieval with Regularized Models
       Guihong Cao, Jian-Yun Nie, Luo Si, Jing Bai
    d) Learning to Rank for Information Retrieval Using Genetic Programming
       Jen-Yuan Yeh, Jung-Yi Lin, Hao-Ren Ke, Wei-Pang Yang
12:30~14:00: Lunch
14:00~14:40: Keynote Speech II
    Learning to Rank for Web Search: Some New Directions
    Christopher J. C. Burges
14:45~15:30: Paper Session III: Applications
    a) Addressing Malicious Noise in Clickthrough Data
       Filip Radlinski
    b) Efficient Query Delegation by Detecting Redundant Retrieval Strategies
       Christian Scheel, Nicolas Neubauer, Andreas Lommatzsch, Klaus Obermayer, Sahin Albayrak
15:30~16:00: Coffee break
16:00~17:00: Panel

Organizers / Co-chairs

Thorsten Joachims, Cornell Univ.

Hang Li, Microsoft Research Asia

Tie-Yan Liu, Microsoft Research Asia

ChengXiang Zhai, Univ. of Illinois at Urbana-Champaign

Program Committee

Eugene Agichtein, Emory University

Javed Aslam, Northeastern University

Chris Burges, Microsoft Research

Olivier Chapelle, Yahoo Research

Hsin-Hsi Chen, National Taiwan University

Bruce Croft, University of Massachusetts, Amherst

Ralph Herbrich, Microsoft Research Cambridge

Djoerd Hiemstra, University of Twente

Thomas Hofmann, Google

Rong Jin, Michigan State University

Paul Kantor, Rutgers University

Sathiya Keerthi, Yahoo Research

Ravi Kumar, Yahoo Research

Quoc Le, Australian National University

Guy Lebanon, Purdue University

Donald Metzler, University of Massachusetts, Amherst

Einat Minkov, Carnegie Mellon University

Filip Radlinski, Cornell University       

Mehran Sahami, Google

Robert Schapire, Princeton University

Michael Taylor, Microsoft Research Cambridge

Yiming Yang, Carnegie Mellon University

Yi Zhang, University of California, Santa Cruz

Kai Yu, NEC Research Institute

Hongyuan Zha, Georgia Tech

Important Dates

  • Paper Submission Due: June 8
  • Author Notification Date: June 28
  • Camera Ready: July 5

Contact Us

tyliu [at] microsoft [dot] com
