Space-Time Video Montage

  • Hong-Wen Kang,
  • Yasuyuki Matsushita,
  • Xiaoou Tang,
  • Xue-Quan Chen

Published by Association for Computing Machinery, Inc.

Abstract

Conventional video summarization methods focus predominantly on summarizing videos along the time axis, for example by building a movie trailer. The resulting trailer tends to retain much empty background space in the video frames while discarding much informative content due to the size limit. In this paper, we propose a novel space-time video summarization method, which we call space-time video montage. The method simultaneously analyzes the spatial and temporal information distribution in a video sequence and extracts the visually informative space-time portions of the input videos. These informative portions are represented as volumetric layers, which are then packed into a small output video volume such that the total amount of visual information in the volume is maximized. To perform the packing, we develop a new algorithm based on first-fit and graph-cut optimization techniques. Because our method can cut off spatially and temporally less informative portions, it generates much more compact yet highly informative output videos. The effectiveness of our method is validated by extensive experiments on a wide variety of videos.
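To make the packing idea concrete, the sketch below shows a greedy first-fit placement of information-scored space-time layers into a fixed output volume. It is only an illustrative toy, not the authors' algorithm: the layer masks, scores, and the `first_fit_pack` helper are hypothetical, and the paper's actual method additionally uses graph-cut optimization to resolve how overlapping layers are blended.

```python
# Illustrative sketch only (assumed data layout, not the paper's implementation):
# greedily place binary space-time layers, most informative first, at the first
# offset where they do not overlap layers already packed into the output volume.
import numpy as np


def first_fit_pack(layers, out_shape):
    """Pack space-time layers into an output volume by first-fit.

    layers    : list of (mask, score) pairs; mask is a 3D boolean array
                (frames x height x width) marking the informative region,
                score is its assumed visual-information value.
    out_shape : (T, H, W) of the output video volume.

    Returns a list of (layer_index, (dt, dy, dx)) placements.
    """
    occupied = np.zeros(out_shape, dtype=bool)
    placements = []
    # Try the most informative layers first.
    order = sorted(range(len(layers)), key=lambda i: -layers[i][1])
    for i in order:
        mask, _ = layers[i]
        t, h, w = mask.shape
        placed = False
        # First-fit scan over all offsets where the layer fits in the volume.
        for dt in range(out_shape[0] - t + 1):
            for dy in range(out_shape[1] - h + 1):
                for dx in range(out_shape[2] - w + 1):
                    region = occupied[dt:dt + t, dy:dy + h, dx:dx + w]
                    if not np.any(region & mask):  # no overlap with placed layers
                        occupied[dt:dt + t, dy:dy + h, dx:dx + w] |= mask
                        placements.append((i, (dt, dy, dx)))
                        placed = True
                        break
                if placed:
                    break
            if placed:
                break
    return placements


if __name__ == "__main__":
    # Toy usage: two solid layers packed into a 10-frame, 64x64 output volume.
    layer_a = (np.ones((4, 16, 16), dtype=bool), 5.0)
    layer_b = (np.ones((6, 32, 32), dtype=bool), 9.0)
    print(first_fit_pack([layer_a, layer_b], (10, 64, 64)))
```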