Re-rendering from a Sparse Set of Images

Established: June 1, 2001

We present a framework for rendering a real object from arbitrary viewpoints and relighting it under novel illumination conditions, given only a sparse set of images and a pre-acquired geometric model of the object. Using the 3D model and this small set of images, we recover all the photometric information necessary for subsequent rendering. Specifically, we estimate the illumination distribution, represented on a hemisphere covering the object, together with the parameters of the simplified Torrance-Sparrow reflection model. The problem is formulated as a 2D blind deconvolution on the surface of the hemisphere and solved by alternately fixing one variable of the objective function and solving a non-blind deconvolution problem. Unlike previous inverse-rendering approaches, we require fewer input images and recover all three unknowns, namely diffuse texture, specular reflection, and lighting, from observations of real objects, thereby increasing the flexibility of the system and yielding a very compact representation of real-world objects for photorealistic rendering.
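To make the formulation concrete, the equations below sketch, in our own illustrative notation (not the paper's exact definitions), the kind of image formation model and alternating scheme such an approach rests on.

\[
I(\mathbf{x}) \;=\; k_d(\mathbf{x}) \int_{\Omega} L(\boldsymbol{\omega})\, \max(\mathbf{n}\cdot\boldsymbol{\omega},\,0)\, d\boldsymbol{\omega}
\;+\; \frac{k_s(\mathbf{x})}{\cos\theta_r} \int_{\Omega} L(\boldsymbol{\omega})\, \exp\!\left(-\frac{\alpha(\boldsymbol{\omega})^2}{2\sigma^2}\right) d\boldsymbol{\omega},
\]

where \(L\) is the hemispherical illumination distribution, \(k_d\) and \(k_s\) are the diffuse and specular albedos, \(\sigma\) is the surface roughness of the simplified Torrance-Sparrow model, \(\theta_r\) is the angle between the viewing direction and the surface normal, and \(\alpha(\boldsymbol{\omega})\) is the angle between the normal and the bisector of the viewing and light directions. Because the specular term is a convolution of \(L\) with a Gaussian-like lobe of unknown width \(\sigma\), jointly estimating the lighting and the reflectance parameters is a blind deconvolution; a natural alternating scheme, consistent with the description above, is

\[
L^{(t+1)} = \arg\min_{L} \big\| I - A\!\left(k^{(t)}, \sigma^{(t)}\right) L \big\|^2, \qquad
\left(k^{(t+1)}, \sigma^{(t+1)}\right) = \arg\min_{k,\, \sigma} \big\| I - A(k, \sigma)\, L^{(t+1)} \big\|^2,
\]

where \(A(k, \sigma)\) denotes the rendering operator with the reflectance parameters fixed, so the first step reduces to a non-blind deconvolution for the lighting.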

Images

[Figure: Inputs]

[Figure: Estimated Lighting]

[Figure: View-dependent Rendering]

[Figure: Relighting]