Towards methods for the collective gathering and quality control of relevance assessments

Gabriella Kazai, Natasa Milic-Frayling, and Jamie Costello


Growing interest in online collections of digital books and video content motivates the development and optimization of adequate retrieval systems. However, traditional methods for collecting relevance assessments to tune system performance are challenged by the nature of digital items in such collections, where assessors face considerable effort to review and assess content through extensive reading, browsing, and within-document searching. The extra strain is caused by the length and cohesion of the digital item and the dispersion of topics within it. We propose a method for the collective gathering of relevance assessments that uses a social game model to stimulate participants' engagement. The game provides incentives for assessors to follow a predefined review procedure and makes provisions for the quality control of the collected relevance judgments. We discuss the approach in detail and present the results of a pilot study conducted on a book corpus to validate the approach. Our analysis reveals intricate relationships between the affordances of the system, the incentives of the social game, and the behavior of the assessors. We show that the proposed game design achieves its two designated goals: the incentive structure motivates endurance in assessors, and the review process encourages truthful assessment.


Publication type: Inproceedings
Published in: Proceedings of the 32nd International ACM SIGIR Conference on Research and Development in Information Retrieval (Boston, MA, USA, July 19–23, 2009). SIGIR '09
Publisher: Association for Computing Machinery, Inc.