Shujie Liu, Philip A. Chou, Cha Zhang, Zhengyou Zhang, and Chang Wen Chen
The most significant problem in generating virtual views from a limited number of video camera views is handling areas that become dis-occluded when the virtual view is shifted away from the camera view. We propose using temporal information to address this problem, based on the notion that dis-occluded areas may have been seen by some camera in previous frames. We formulate the problem as estimating the underlying state of the object in a stochastic dynamical system, given a sequence of observations. We apply this formulation to improving the visual quality of virtual views generated from a single “color plus depth” camera, and show that our algorithm achieves better results than depth image based rendering with standard inpainting.
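The core idea of estimating a latent state from a sequence of noisy observations can be illustrated with a generic linear-Gaussian filter. The sketch below is purely illustrative and is not the paper's algorithm: it tracks a scalar random-walk state (the paper's state is the appearance/geometry of dis-occluded regions, and the function name, noise variances, and random-walk model here are assumptions for the example).

```python
def kalman_filter(observations, q=1e-3, r=0.25, x0=0.0, p0=1.0):
    """Posterior mean estimates of a scalar random-walk state.

    observations -- noisy measurements of the latent state
    q  -- process noise variance (how much the state drifts per step)
    r  -- observation noise variance
    x0, p0 -- prior mean and variance of the state
    """
    x, p = x0, p0
    estimates = []
    for z in observations:
        # Predict: random-walk model adds process noise to the variance.
        p = p + q
        # Update: blend prediction with the new observation.
        k = p / (p + r)          # Kalman gain in (0, 1)
        x = x + k * (z - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates
```

With repeated observations of the same scene content, each step pulls the estimate closer to the underlying value while the gain shrinks as confidence accumulates; the paper's formulation plays an analogous role, accumulating evidence about dis-occluded areas across frames.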
Published in: Int'l Conference on Multimedia and Expo (ICME)
© 2012 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.