Obtaining photo-realistic geometric and photometric models is an important component of image-based rendering systems that use real-world imagery as their input. Applications of such systems include novel view generation and the mixing of live imagery with synthetic computer graphics. This paper reviews a number of image-based representations, and their associated reconstruction algorithms, that we have developed over the last few years. It begins by reviewing some recent approaches to the classic problem of recovering a depth map from two or more images. It then describes some of our newer representations and reconstruction algorithms, including volumetric representations, layered plane-plus-parallax representations (including the recovery of transparent and reflected layers), and multiple depth maps. The paper also presents our work in video-based rendering, in which we synthesize novel video from short sample clips by discovering their (quasi-repetitive) temporal structure.
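To give a concrete sense of the classic two-image depth recovery problem mentioned above, the sketch below implements simple block-matching stereo: for each pixel in a rectified left image, it searches a range of horizontal disparities and picks the shift minimizing the sum of absolute differences (SAD) against the right image. This is a minimal illustration of the general problem, not the paper's own algorithms; the function name and parameters are invented for this example.

```python
import numpy as np

def block_matching_disparity(left, right, patch=5, max_disp=6):
    """Estimate per-pixel disparity for a rectified stereo pair by
    minimizing sum-of-absolute-differences (SAD) over horizontal shifts.
    (Illustrative sketch; a point at x in `left` appears at x - d in `right`.)"""
    h, w = left.shape
    r = patch // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            ref = left[y - r:y + r + 1, x - r:x + r + 1]
            # SAD cost for each candidate disparity d.
            costs = [
                np.abs(ref - right[y - r:y + r + 1,
                                   x - d - r:x - d + r + 1]).sum()
                for d in range(max_disp + 1)
            ]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Synthetic rectified pair: the right view is the left view shifted
# horizontally by a constant disparity of 2 pixels.
rng = np.random.default_rng(0)
left = rng.random((20, 40))
right = np.roll(left, -2, axis=1)  # right[y, x] = left[y, x + 2]
disp = block_matching_disparity(left, right)
```

Real stereo algorithms add regularization (e.g. smoothness priors) and sub-pixel refinement; pure per-pixel SAD matching is noisy in textureless regions, which is one motivation for the richer representations the paper surveys.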
Published in: Fourth International Workshop on Cooperative and Distributed Vision