Relevance assessment: are judges exchangeable and does it matter?

  • Peter Bailey,
  • Nick Craswell,
  • Ian Soboroff,
  • Paul Thomas,
  • Arjen P. de Vries,
  • Emine Yilmaz

SIGIR '08: Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval

Published by ACM


We investigate to what extent people making relevance judgements for a reusable IR test collection are exchangeable. We consider three classes of judge: “gold standard” judges, who are topic originators and are experts in a particular information seeking task; “silver standard” judges, who are task experts but did not create topics; and “bronze standard” judges, who are those who did not define topics and are not experts in the task.

Analysis shows low levels of agreement in relevance judgements between these three groups. We report on experiments to determine whether this is sufficient to invalidate the use of a test collection for measuring system performance when relevance assessments have been created by silver standard or bronze standard judges. We find that both system scores and system rankings are subject to consistent but small differences across the three assessment sets. It appears that test collections are not completely robust to changes of judge when these judges vary widely in task and topic expertise. Because the relative performance of assessed systems can change, bronze standard judges may not be able to substitute for topic and task experts, and gold standard judges are preferred.
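As a rough illustration of the two comparisons the abstract refers to, the sketch below computes Cohen's kappa between two binary judgement sets and Kendall's tau between the system rankings they induce. The measures, function names, and data are assumptions chosen for illustration, not the paper's actual analysis pipeline.

```python
from itertools import combinations


def cohen_kappa(judgements_a, judgements_b):
    """Cohen's kappa for two binary judgement lists over the same documents."""
    n = len(judgements_a)
    observed = sum(a == b for a, b in zip(judgements_a, judgements_b)) / n
    p_a = sum(judgements_a) / n          # P(relevant) under judge A
    p_b = sum(judgements_b) / n          # P(relevant) under judge B
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    if expected == 1.0:
        return 1.0
    return (observed - expected) / (1 - expected)


def kendall_tau(ranking_a, ranking_b):
    """Kendall's tau between two rankings (lists of system names, best first)."""
    pos_b = {system: i for i, system in enumerate(ranking_b)}
    concordant = discordant = 0
    for x, y in combinations(ranking_a, 2):
        # x is ranked above y under judgement set A; does set B agree?
        if pos_b[x] < pos_b[y]:
            concordant += 1
        else:
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)


if __name__ == "__main__":
    # Hypothetical binary judgements from a "gold" and a "bronze" judge
    gold = [1, 1, 0, 1, 0, 0, 1, 0]
    bronze = [1, 0, 0, 1, 1, 0, 0, 0]
    print(f"kappa(gold, bronze) = {cohen_kappa(gold, bronze):.3f}")

    # Hypothetical system rankings induced by scoring runs against each judgement set
    ranking_gold = ["sysA", "sysB", "sysC", "sysD"]
    ranking_bronze = ["sysB", "sysA", "sysC", "sysD"]
    print(f"tau(gold, bronze)   = {kendall_tau(ranking_gold, ranking_bronze):.3f}")
```

In this framing, low kappa between judge classes corresponds to the disagreement the abstract reports, while a Kendall's tau close to 1 across assessment sets would indicate that system rankings remain stable despite that disagreement.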