SIGIR 2009 Workshop: Learning to Rank for Information Retrieval

Overview

As an interdisciplinary field between information retrieval and machine learning, learning to rank is concerned with automatically constructing a ranking model from training data. Learning-to-rank technologies have been successfully applied to many information retrieval tasks, such as search and summarization, and have recently attracted increasing attention in both the information retrieval and machine learning communities.
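To make the idea concrete, the following is a minimal, hypothetical sketch (not from the workshop materials) of the pairwise approach to learning to rank: a linear scoring function is trained so that, for each query, documents judged more relevant score higher than documents judged less relevant. All function names, feature values, and hyperparameters here are illustrative assumptions.

```python
def train_pairwise(pairs, n_features, epochs=100, lr=0.1):
    """Perceptron-style pairwise training: for each (better, worse)
    feature-vector pair, nudge the weight vector w whenever the
    currently learned scores misrank the pair."""
    w = [0.0] * n_features
    for _ in range(epochs):
        for better, worse in pairs:
            # Margin of the preferred document over the other one.
            margin = sum(wi * (b - c) for wi, b, c in zip(w, better, worse))
            if margin <= 0:  # misranked pair: move w toward the difference
                w = [wi + lr * (b - c) for wi, b, c in zip(w, better, worse)]
    return w

def score(w, x):
    """Linear scoring function w . x used to rank documents."""
    return sum(wi * xi for wi, xi in zip(w, x))

# Toy training pairs: (features of more-relevant doc, features of less-relevant doc)
pairs = [([3.0, 1.0], [1.0, 2.0]),
         ([2.5, 0.5], [0.5, 1.5])]
w = train_pairwise(pairs, n_features=2)
assert score(w, [3.0, 1.0]) > score(w, [1.0, 2.0])
```

Real learning-to-rank methods differ mainly in the loss they optimize (pointwise, pairwise, or listwise) and in the model class, but the pipeline of features, judged training data, and a learned scoring function is the same.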


At SIGIR 2007 and SIGIR 2008, we organized two successful workshops on learning to rank for information retrieval. The reports of those workshops are available at http://www.sigir.org/forum/2007D/2007d_sigirforum_joachims.pdf and http://www.sigir.org/forum/2008D/sigirwksp/2008d_sigirforum_li.pdf. The websites of the previous workshops can be found at http://research.microsoft.com/users/LR4IR-2007/ and http://research.microsoft.com/users/LR4IR-2008/.


Topics of Interest

We solicit submissions on any aspect of learning to rank for information retrieval. Particular areas of interest include, but are not limited to:

  • Models, features, and algorithms of learning to rank

  • Evaluation methods for learning to rank

  • Data creation methods for learning to rank

  • Applications of learning to rank methods to information retrieval

  • Comparison between traditional approaches and learning approaches to ranking

  • Theoretical analyses on learning to rank

  • Empirical comparison between learning to rank methods


Shared Benchmark Data

During the workshop, the LETOR team will announce version 4.0 of the LETOR dataset, which contains more queries and enables new research topics.

Planned Activities

8:30-8:40: Opening remarks

8:40-9:40: Invited Talk - Learning to rank for diversity (Paul B. Kantor, Rutgers University)

9:40-10:30: Paper Session 1 (2 papers) - Learning to rank methods

    Efficient and Accurate Local Learning for Ranking
    Learning to rank with low rank

10:30-11:00: Break

11:00-12:15: Paper Session 2 (3 papers) - Learning to rank applications

    Learning to Rank QA Data
    Ranking Experts with Discriminative Probabilistic Models
    Priors in Web Search

12:15-12:30: LETOR 4.0 announcement

12:30-1:30: Lunch

1:30-2:30: Invited talk - Direct Optimization for Ranking (Olivier Chapelle, Yahoo! Research)

2:30-3:00: LETOR feedback (for future versions)

3:00-3:30: Break

3:30-4:20: Paper Session 3 (2 papers) - Evaluation of learning to rank

    Is learning to rank effective for Web search

    On the Choice of Effectiveness Measures for Learning to Rank
4:20-5:20: Opinion session

5:20-5:30: Wrap up


Accepted Papers

1. Efficient and Accurate Local Learning for Ranking
Somnath Banerjee (HP), Avinava Dubey (IIT Bombay), Jinesh Machchhar* (IIT Bombay), Soumen Chakrabarti (IIT Bombay)
2. Learning to Rank QA Data
Suzan Verberne* (CLST, RU Nijmegen), Hans Van Halteren (CLST, RU Nijmegen), Daphne Theijssen (Dept. of Linguistics, RU Nijmegen), Stephan Raaijmakers (TNO, Delft), Lou Boves (CLST, RU Nijmegen)
3. Ranking Experts with Discriminative Probabilistic Models
Yi Fang (Purdue University), Luo Si (Purdue University), Aditya Mathur (Purdue University)
4. Is learning to rank effective for Web search
Min Zhang (Tsinghua University), Da Kuang (Tsinghua University), Guichun Hua (Tsinghua University), Yiqun Liu (Tsinghua University), Shaoping Ma (Tsinghua University)
5. Priors in Web Search
Michael Bendersky (CIIR), Kenneth Church (Johns Hopkins University)
6. Learning to rank with low rank
Bing Bai* (NEC Labs America), Jason Weston (NEC Labs America), David Grangier (NEC Labs America), Ronan Collobert (NEC Labs America), Yanjun Qi (NEC Labs America), Kunihiko Sadamasa (NEC Labs America), Olivier Chapelle (Yahoo Research), Kilian Weinberger (Yahoo! Research)
7. On the Choice of Effectiveness Measures for Learning to Rank
Emine Yilmaz (Microsoft), Stephen Robertson (Microsoft Research Cambridge)


Papers should be submitted electronically via the submission site (https://cmt.research.microsoft.com/LR4IR2009/). Submissions should follow the ACM conference style (see the ACM template page) and may not exceed 8 pages. Each submission will be reviewed by at least three members of the program committee. Reviewing is double-blind; authors should conceal their identity where it is practical to do so. All accepted papers will be published in the workshop proceedings, which will be printed and made available at the workshop. At least one author of each accepted paper should register for and attend the workshop.

Organizers / Co-chairs

Hang Li, Microsoft Research Asia

Tie-Yan Liu, Microsoft Research Asia

ChengXiang Zhai, Univ. of Illinois at Urbana-Champaign

Program Committee

Olivier Chapelle, Yahoo! Research
Hsin-Hsi Chen, National Taiwan University
Ralf Herbrich, Microsoft Research Cambridge
Rong Jin, Michigan State University
Sathiya Keerthi, Yahoo! Research
Ravi Kumar, Yahoo! Research
Guy Lebanon, Purdue University
Donald Metzler, Yahoo! Research
Einat Minkov, Carnegie Mellon University
Quoc Le, Stanford University
Filip Radlinski, Microsoft Research Cambridge
Michael Taylor, Microsoft Research Cambridge
Kai Yu, NEC Research Institute
Hongyuan Zha, Georgia Tech
Zhaohui Zheng, Yahoo! Research
John Guiver, Microsoft Research Cambridge
Guirong Xue, Shanghai Jiao Tong University
Alekh Agarwal, University of California at Berkeley
Soumen Chakrabarti, IIT Bombay
Ping Li, Cornell University
Irina Matveeva, University of Chicago
Yisong Yue, Cornell University
Jun Xu, Microsoft Research Asia
Tao Qin, Microsoft Research Asia

Important Dates

  • Paper Submission Due: June 7 (23:59, Hawaii time)
  • Author Notification Date: June 24
  • Workshop: July 23

Contact Us

Tie-Yan Liu, Microsoft Research Asia

tyliu [at] microsoft [dot] com

