Re-rendering from a Sparse Set of Images

We present a framework for view-dependent rendering from arbitrary viewpoints, and for relighting under novel illumination, of a real object given only a sparse set of images and a pre-acquired geometric model of the object. From the 3D model and the small set of images, we recover all the photometric information necessary for subsequent rendering: the illumination distribution, represented on a hemisphere covering the object, and the parameters of a simplified Torrance-Sparrow reflection model. We formulate this problem as a 2D blind deconvolution on the surface of the hemisphere, and solve it by alternately fixing one variable of the objective function and solving the resulting non-blind deconvolution problem. Unlike previous inverse rendering approaches, our method requires fewer input images and recovers all three unknowns, namely diffuse texture, specular reflection, and lighting, from observations of real objects, thereby increasing the flexibility of the system and achieving a very compact representation of real-world objects for photorealistic rendering.
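To illustrate the alternating strategy described above, here is a minimal 1D blind-deconvolution sketch (the actual method operates in 2D on the hemisphere; the function names and the toy data are hypothetical). Each half-step fixes one unknown and solves an exact least-squares (non-blind) deconvolution for the other, so the residual is non-increasing across iterations:

```python
import numpy as np

def conv_matrix(h, n):
    """Matrix C such that C @ x equals np.convolve(x, h) for len(x) == n."""
    C = np.zeros((len(h) + n - 1, n))
    for j in range(n):
        C[j:j + len(h), j] = h
    return C

def blind_deconv_1d(y, x_len, k_len, n_iter=25):
    """Toy 1D blind deconvolution by alternating least squares.

    Alternately fix the kernel k and solve for the signal x, then fix x
    and solve for k. Each half-step exactly minimizes ||conv(x, k) - y||
    over one variable, a 1D analogue of alternating between lighting and
    reflectance in the 2D hemisphere formulation.
    """
    k = np.zeros(k_len)
    k[0] = 1.0                      # start from a delta kernel
    x = np.zeros(x_len)
    for _ in range(n_iter):
        x, *_ = np.linalg.lstsq(conv_matrix(k, x_len), y, rcond=None)
        k, *_ = np.linalg.lstsq(conv_matrix(x, k_len), y, rcond=None)
    return x, k
```

On synthetic data, e.g. `y = np.convolve(x_true, k_true)`, the reconvolution residual after the alternating iterations can only decrease relative to the starting point, mirroring the monotone behavior of coordinate-wise minimization.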

People involved: Zhengyou Zhang, Ko Nishino, Katsushi Ikeuchi

Paper: K. Nishino, Z. Zhang, and K. Ikeuchi, "Determining Reflectance
Parameters and Illumination Distribution from a Sparse Set of Images for
View-dependent Image Synthesis," in *Proc. of the Eighth IEEE International
Conference on Computer Vision* (ICCV '01), Vol. 1, pp. 599-606, July 2001.

Report: K. Nishino, K. Ikeuchi, and Z. Zhang, *Re-rendering from a Sparse Set
of Images*, Technical Report DU-CS-05-12, Drexel University, Philadelphia, PA,
USA, November 2005.

Left: input images. Right: estimated lighting.

Left: view-dependent rendering. Right: relighting.