Yasuyuki Matsushita, Sing Bing Kang, Stephen Lin, Heung-Yeung Shum, and Xin Tong
Densely sampled image representations such as the light field or Lumigraph have been effective in enabling photorealistic image synthesis. Unfortunately, lighting interpolation with such representations has not been shown to be possible without accurate 3D geometry and surface reflectance properties. In this paper, we propose an approach to image-based lighting interpolation based on estimates of geometry and shading from relatively few images. We decompose light fields captured under different lighting conditions into intrinsic images (reflectance and illumination images), and estimate view-dependent scene geometries using multi-view stereo. We call the resulting representation an Intrinsic Lumigraph. In the same way that the Lumigraph uses geometry to permit more accurate view interpolation, the Intrinsic Lumigraph uses both geometry and intrinsic images to allow high-quality interpolation across views and lighting conditions. The joint use of geometry and intrinsic images is effective in computing shadow masks for shadow prediction under new lighting conditions. We illustrate our approach with images of real scenes.
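The intrinsic-image decomposition referred to above models each observed image as the per-pixel product of a reflectance image and an illumination (shading) image, I = R ⊙ S. The following is a minimal sketch of why this factorization enables relighting: reflectance is shared across lighting conditions, so dividing it out of one image and reattaching a different shading synthesizes the scene under new lighting. The synthetic data and variable names here are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

# Synthetic 4x4 "scene": reflectance R is shared across lighting
# conditions; each lighting condition contributes its own shading.
rng = np.random.default_rng(0)
R = rng.uniform(0.2, 1.0, size=(4, 4))    # surface reflectance (albedo)
S1 = rng.uniform(0.1, 1.0, size=(4, 4))   # shading under lighting 1
S2 = rng.uniform(0.1, 1.0, size=(4, 4))   # shading under lighting 2

# Observed images are per-pixel products: I = R * S.
I1 = R * S1
I2 = R * S2

# Given the shading of condition 1, recover reflectance by division,
# then synthesize the scene under lighting condition 2 by reattaching S2.
R_est = I1 / S1
I2_synth = R_est * S2

# The synthesized image matches the true image under lighting 2.
max_err = float(np.max(np.abs(I2_synth - I2)))
```

In practice the shading images are not known exactly and must themselves be estimated (in the paper, jointly with view-dependent geometry from multi-view stereo), but the multiplicative model is what lets reflectance be held fixed while illumination is interpolated.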
Publisher World Scientific Publishing
Copyright © World Scientific Publishing