High-Quality Video View Interpolation Using a Layered Representation

C. Zitnick, S.B. Kang, M. Uyttendaele, S. Winder, and R. Szeliski


The ability to interactively control viewpoint while watching a video is an exciting application of image-based rendering. The goal of our work is to render dynamic scenes with interactive viewpoint control using a relatively small number of video cameras. In this paper, we show how high-quality video-based rendering of dynamic scenes can be accomplished using multiple synchronized video streams combined with novel image-based modeling and rendering algorithms. Once these video streams have been processed, we can synthesize any intermediate view between cameras at any time, with the potential for space-time manipulation.
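To make the idea of synthesizing an intermediate view concrete, here is a minimal sketch of blending two already-warped camera views by proximity to the virtual viewpoint. This is a generic image-based-rendering illustration, not the paper's exact method; the function name, the scanline data, and the simple linear weighting are all assumptions for the example.

```python
def interpolate_views(color_a, color_b, t):
    """Blend per-pixel colors from two warped camera views.

    t in [0, 1] places the virtual viewpoint between camera A (t=0)
    and camera B (t=1); the nearer camera receives the higher weight.
    (Hypothetical helper for illustration only.)
    """
    return [(1.0 - t) * ca + t * cb for ca, cb in zip(color_a, color_b)]

# Hypothetical warped scanlines from two neighboring cameras.
row_a = [0.25, 0.5]
row_b = [0.75, 1.0]
mid = interpolate_views(row_a, row_b, 0.5)  # viewpoint halfway between
# mid == [0.5, 0.75]
```

In practice each view is first warped to the virtual camera using per-pixel depth before any such blend; the weighting above only captures the final mixing step.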

In our approach, we first use a novel color segmentation-based stereo algorithm to generate high-quality photoconsistent correspondences across all camera views. Mattes for areas near depth discontinuities are then automatically extracted to reduce artifacts during view synthesis. Finally, a novel temporal two-layer compressed representation that handles matting is developed for rendering at interactive rates.
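The two-layer representation composites a boundary layer (with its extracted matte) over a main layer during rendering. A minimal sketch of that per-pixel "over" compositing step follows; the function name and sample values are assumptions for illustration, not the paper's implementation.

```python
def composite_layers(main_color, boundary_color, alpha):
    """Matte the boundary layer over the main layer (standard "over" compositing).

    alpha is the boundary layer's matte: near 1 right at a foreground
    edge, falling to 0 away from depth discontinuities, so the main
    layer shows through everywhere except near object boundaries.
    (Hypothetical helper for illustration only.)
    """
    return [a * b + (1.0 - a) * m
            for m, b, a in zip(main_color, boundary_color, alpha)]

# Hypothetical scanline: main layer is dark, boundary layer is bright,
# and the matte is 0.5 at the edge pixel and 0 elsewhere.
result = composite_layers([0.0, 0.0], [1.0, 1.0], [0.5, 0.0])
# result == [0.5, 0.0]
```

Soft alpha values near discontinuities are what suppress the hard cut-out artifacts that a single-layer depth map would produce at object boundaries.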


Publication type: Inproceedings
Published in: ACM SIGGRAPH
Publisher: Association for Computing Machinery, Inc.