Rohan Ramanath, Monojit Choudhury, Kalika Bali, and Rishiraj Saha Roy
Query segmentation, like text chunking, is the first step towards query understanding. In this study we explore the effectiveness of crowdsourcing for this task. Through carefully designed control experiments and analysis of the experimental data with inter-annotator agreement metrics, we show that crowdsourcing may not be a suitable approach for query segmentation, because the crowd exhibits a very strong bias towards dividing the query into roughly equal (often only two) parts. Similarly, in the case of hierarchical or nested segmentation, turkers show a strong preference for balanced binary trees.
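To make the two annotation schemes concrete, the following sketch (an illustration, not code from the paper; the example query and helper function are hypothetical) shows one common way to represent a flat segmentation as boundary flags between tokens, and a nested segmentation as a binary tree over the same tokens:

```python
# Hypothetical example query, tokenized on whitespace.
query = "new york times square dance".split()

# Flat segmentation: one flag per gap between the 5 tokens (1 = segment break).
# Encodes "new york | times square | dance".
flat = [0, 1, 0, 1]

def segments(tokens, boundaries):
    """Recover the multiword segments implied by the boundary flags."""
    out, cur = [], [tokens[0]]
    for tok, brk in zip(tokens[1:], boundaries):
        if brk:
            out.append(" ".join(cur))
            cur = [tok]
        else:
            cur.append(tok)
    out.append(" ".join(cur))
    return out

# Nested (hierarchical) segmentation: a binary tree over the same tokens.
# The abstract's finding is that turkers tend to prefer balanced trees
# like the left subtree here, regardless of the query's actual structure.
nested = ((("new", "york"), ("times", "square")), "dance")

print(segments(query, flat))  # ['new york', 'times square', 'dance']
```

Under this encoding, a turker's bias towards splitting the query into two roughly equal parts corresponds to placing a single boundary flag near the middle of the gap sequence.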
Published in: Proceedings of ACL
Publisher: Association for Computational Linguistics