Jingjing Liu and Xian-Sheng Hua
This paper addresses the problem of improving a text-search baseline for video retrieval, specifically for the search tasks in TRECVID. Given a plain-text query, we first perform syntactic segmentation and semantic expansion of the query, then identify the underlying "targeted objects" that should appear in the retrieved video shots, and scale up the weights of the video shots retrieved by the query terms representing these targeted objects. We refer to this approach as "object-sensitive query analysis" for video search. Specifically, we propose a set of methods for identifying the terms that represent the "targeted objects" in a video search query, and a modified, object-centric BM25 algorithm that emphasizes the impact of these object terms. In practice, we apply object-sensitive query analysis before the text-search stage, and verify the effectiveness of the proposed approaches on the TRECVID 2005 and 2006 datasets. The experimental results show that the proposed object-sensitive query analysis yields significant improvements over the raw text-search baseline for video search.
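The object-centric weighting described above can be illustrated with a minimal sketch: standard BM25 scoring in which the contribution of terms identified as "targeted objects" is multiplied by a boost factor. The boost scheme, parameter names, and default values here are illustrative assumptions, not the paper's exact formulation.

```python
import math

def bm25_object_sensitive(query_terms, object_terms, doc_tf, doc_len, avgdl,
                          df, n_docs, k1=1.2, b=0.75, object_boost=2.0):
    """Score one document (e.g. the text associated with a video shot)
    against a query, scaling up the contribution of query terms that
    name the query's targeted objects.

    query_terms  -- list of query terms after segmentation/expansion
    object_terms -- subset of query terms identified as targeted objects
    doc_tf       -- term -> frequency in this document
    df           -- term -> document frequency over the collection
    object_boost -- assumed multiplier for object-term contributions
    """
    score = 0.0
    for term in query_terms:
        tf = doc_tf.get(term, 0)
        if tf == 0:
            continue
        # standard BM25 inverse document frequency
        d = df.get(term, 0)
        idf = math.log(1 + (n_docs - d + 0.5) / (d + 0.5))
        # length-normalized term-frequency saturation
        norm = tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avgdl))
        # object-sensitive step: boost terms that represent targeted objects
        weight = object_boost if term in object_terms else 1.0
        score += weight * idf * norm
    return score
```

With `object_terms` empty this reduces to plain BM25, so shots matching only context terms are ranked as in the baseline, while shots matching object terms are promoted.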