VSL is a nonexistent (i.e. virtual, informal) group of researchers led by Tetsuya Sakai. The members of this lab are usually MSRA interns or visiting researchers. In three words, VSL's research goal is "easy information access." In equation form, it can be expressed as:
VSL = ANINA + Evaluate
ANticipate: understand the user's intent with or without the initial query; know not only what to give the user but also when to give it;
Integrate: gather and reorganise useful information from multiple sources, languages and media; present the information concisely and intuitively;
NAvigate: help the user clarify or change his information needs through minimal interactions.
Below, some specific subgoals are briefly discussed. The list is not exhaustive.
One Click Access
Given a query input by the user, try to quickly satisfy the user with the system's very first response. This is like question answering, but the query may involve multiple subqueries, and the system must aim to present the most important pieces of information first and to minimise the amount of text the user has to read. We are currently running the textual One Click Access task at NTCIR, but in the future we would also like to consider more visual system responses, e.g. highlighted text, photos and maps.
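The evaluation idea behind One Click Access can be illustrated with a toy metric that rewards important information units (nuggets) appearing early in the system's textual output. The sketch below is only in the spirit of the task's position-discounted evaluation, not the official metric; the nugget strings, weights and patience window are illustrative assumptions.

```python
# Toy position-discounted nugget metric in the spirit of 1CLICK evaluation.
# Nuggets, weights, and the patience window are illustrative assumptions,
# not the official task definitions.

def s_like_score(output_text, nuggets, window=500):
    """Sum nugget weights, discounted linearly by the character offset at
    which each nugget's occurrence ends; nuggets past `window` earn zero."""
    gained = 0.0
    for nugget, weight in nuggets.items():
        pos = output_text.find(nugget)
        if pos >= 0:
            # Offset of the nugget's end, as a rough reading cost.
            offset = pos + len(nugget)
            gained += weight * max(0.0, 1.0 - offset / window)
    ideal = sum(nuggets.values())  # all nuggets presented at offset ~0
    return gained / ideal if ideal else 0.0

nuggets = {"born in 1960": 2.0, "professor at Waseda": 1.0}
print(s_like_score("He was born in 1960 and is a professor at Waseda.", nuggets))
```

The key property is that the same set of nuggets earns a higher score when presented with less preceding text, which captures "minimise the amount of text the user has to read."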
Zero Click Access
One Click Access systems wait for the user to provide a query and click on the search button. In contrast, Zero Click Access systems start providing information without waiting for the search button click. Instead, the user's actions, environments and history are used as triggers. And of course the system must not annoy the user. This is very challenging, but is an important research area especially in the mobile domain. Mining mobile queries and sensor data would be a place to start.
Exploratory Search and Interactive Presentation
One Click Access systems are mainly for clear information needs. However, some information needs are vague; some drift; some are difficult to express in the form of a single query. In these situations, interactions between the user and the system are required. But as in One Click Access, we'd like to keep the interactions minimal and simple, and to give what the user wants as quickly as possible. Systems can help the user express his information need in various ways: e.g. by means of advanced forms of query suggestion or anticipation. Also, systems should select the best way to present the desired information to the user: a flat ranked list is seldom optimal. Thus research in search and presentation interfaces will be tackled.
Multilingual Information Access
While a lot of information on the Web may be redundant across languages, the Web sources in different languages also often complement one another. For some queries, a Chinese Web page may contain the most valuable information, while for others, gathering different pieces of information from different language sources may be important. Also, we'd like to help users with different native languages. For these reasons, we will pursue not only language-independent information access technologies, but also language-dependent ones.
Information Access Evaluation
All of the above subgoals go beyond traditional information retrieval evaluation, which is concerned with ranked lists of documents. Accordingly, appropriate ways to evaluate each of these tasks will be required. Moreover, when several evaluation methods exist for the same problem, principled ways to evaluate and compare them will be required. Developing a new functionality and evaluating it are two sides of the same coin. The bottom line is to ensure that the research helps achieve easier information access and, ultimately, user satisfaction.
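Comparing systems (or evaluation methods) in a principled way usually involves significance testing over per-topic scores. Below is a minimal sketch of a two-sided paired bootstrap test, one common approach in IR evaluation; the per-topic scores are synthetic and all names are illustrative, not code from any published work.

```python
import random

def paired_bootstrap_pvalue(scores_x, scores_y, trials=2000, seed=0):
    """Two-sided paired bootstrap test on per-topic score differences.
    Returns the fraction of resamples (under the zero-mean null) whose
    mean difference is at least as extreme as the observed one."""
    rng = random.Random(seed)
    diffs = [x - y for x, y in zip(scores_x, scores_y)]
    observed = sum(diffs) / len(diffs)
    # Shift differences so they satisfy the null hypothesis (mean zero).
    centred = [d - observed for d in diffs]
    count = 0
    for _ in range(trials):
        sample = [rng.choice(centred) for _ in diffs]
        if abs(sum(sample) / len(diffs)) >= abs(observed):
            count += 1
    return count / trials

# Illustrative per-topic scores for two systems under some metric.
x = [0.8, 0.7, 0.9, 0.6, 0.75, 0.85, 0.65, 0.7, 0.8, 0.9]
y = [0.6, 0.65, 0.7, 0.55, 0.6, 0.7, 0.6, 0.65, 0.6, 0.7]
print(paired_bootstrap_pvalue(x, y))
```

A small p-value suggests the score difference is unlikely under the null hypothesis of no difference; running such a test over many system pairs is one way to study how discriminative a metric is.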
If you are interested in joining the VSL, please send a CV to tesakai at microsoft dot com.
VSL Members with Selected Publications
Ke Zhou (University of Glasgow, 2013) #12
Haitao Yu (Tokushima University, 2013) #11
Taiki Miyanishi (Kobe University, 2012) #10
- Miyanishi, T., Sakai, T.: Time-aware Structured Query Suggestion, ACM SIGIR 2013, to appear, July 2013.
Kazuya Narita (Tohoku University, 2012) #9
- Narita, K., Sakai, T., Dou, Z. and Song, Y.-I.: MSRA at NTCIR-10 1CLICK-2, NTCIR-10, to appear, June 2013.
Kosetsu Tsukuda (Kyoto University, 2012) #8
- Tsukuda, K., Dou, Z. and Sakai, T.: Microsoft Research Asia at the NTCIR-10 Intent Task, NTCIR-10, to appear, June 2013.
Mayu Iwata (Osaka University, 2011-2012) #7
- Iwata, M., Sakai, T., Yamamoto, T., Chen, Y., Liu, Y., Wen, J.-R. and Nishio, S.: AspecTiles: Tile-based Visualization of Diversified Web Search Results, ACM SIGIR 2012, August 2012.
Takehiro Yamamoto (Kyoto University, 2011-2012) #6
- Yamamoto, T., Sakai, T., Iwata, M., Chen, Y., Wen, J.-R. and Tanaka, K.: The Wisdom of Advertisers: Mining Subgoals via Query Clustering, ACM CIKM 2012, October 2012.
Takuya Akiba (University of Tokyo, 2011) #5
- Akiba, T. and Sakai, T.: Japanese Hyponymy Extraction based on a Term Similarity Graph, IPSJ SIG Technical Report, November 2011.
Naoki Orii (University of Tokyo, 2011) #4
- Orii, N., Song, Y.-I. and Sakai, T.: Microsoft Research Asia at the NTCIR-9 1CLICK Task, NTCIR-9 Proceedings, December 2011.
Hajime Morita (Tokyo Institute of Technology, 2010-2011) #3
- Morita, H., Sakai, T. and Okumura, M.: Query Snowball: A Co-occurrence-based Approach to Multi-document Summarization for Question Answering, IPSJ TOD, 2012.
- Morita, H., Sakai, T. and Okumura, M.: Query Snowball: A Co-occurrence-based Approach to Multi-Document Summarization for Question Answering, ACL-HLT 2011, June 2011.
Naoyoshi Aikawa (Waseda University, 2010) #2
- Aikawa, N., Sakai, T. and Yamana, H.: Community QA Question Classification: Is the Asker Looking for Subjective Answers or Not?, IPSJ Transactions on Databases, TOD50, June 2011.
- Aikawa, N., Sakai, T. and Yamana, H.: Community QA Question Classification: Is the Asker Looking for Subjective Answers or Not?, WebDB Forum 2010, November 2010. WebDB Forum 2010 Excellent Paper Award.
Makoto Kato (Kyoto University, 2010-2011) #1
- Kato, M.P., Sakai, T., Yamamoto, T., Iwata, M.: Report from the NTCIR-10 1CLICK-2 Japanese Subtask: Baselines, Upperbounds and Evaluation Robustness, ACM SIGIR 2013, to appear, July 2013.
- Kato, M.P., Sakai, T. and Tanaka, K.: When Do People Use Query Suggestion?, Information Retrieval, to appear, 2013.
- Kato, M.P., Sakai, T. and Tanaka, K.: Structured Query Suggestion for Specialization and Parallel Movement: Effect on Search Behaviors, WWW 2012, April 2012.
- Kato, M.P., Sakai, T. and Tanaka, K.: Query Session Data vs. Clickthrough Data as Query Suggestion Resources, ECIR 2011 Workshop on Session Information Retrieval, April 2011.
Tetsuya Sakai (Lab Leader, 2009-) #0
- Sakai, T. and Song, R.: Diversified Search Evaluation: Lessons from the NTCIR-9 INTENT Task, Information Retrieval, 2013.
- Sakai, T., Dou, Z. and Clarke, C.L.A.: The Impact of Intent Selection on Diversified Search Evaluation, ACM SIGIR 2013, to appear, July 2013.
- Sakai, T., Dou, Z., Yamamoto, T., Liu, Y., Zhang, M., Kato, M.P., Song, R., Iwata, M.: Summary of the NTCIR-10 INTENT-2 Task: Subtopic Mining and Search Result Diversification, ACM SIGIR 2013, to appear, July 2013.
- Sakai, T., Dou, Z.: Summaries, Ranked Retrieval and Sessions: A Unified Framework for Information Access Evaluation, ACM SIGIR 2013, to appear, July 2013. U-measure site
- Sakai, T. and Kato, M.P.: One Click One Revisited: Enhancing Evaluation based on Information Units, AIRS 2012, Lecture Notes in Computer Science 7675, pp.39-51, December 2012.
- Sakai, T., Dou, Z., Song, R. and Kando, N.: The Reusability of a Diversified Search Test Collection, AIRS 2012, Lecture Notes in Computer Science 7675, pp.26-38, December 2012. Best Paper Award.
- Sakai, T.: Evaluation with Informational and Navigational Intents, WWW 2012, April 2012.
- Sakai, T., Kato, M. and Song, Y.-I.: Click the Search Button and Be Happy: Evaluating Direct and Immediate Information Access, ACM CIKM 2011, October 2011.
- Sakai, T. and Song, R.: Evaluating Diversified Search Results Using Per-Intent Graded Relevance, ACM SIGIR 2011, July 2011.
- Sakai, T., Ishikawa, D., Kando, N., Seki, Y., Kuriyama, K. and Lin, C.-Y.: Using Graded-Relevance Metrics for Evaluating Community QA Answer Selection, ACM WSDM 2011 oral presentation paper, February 2011.