Technical Report TR-1-99, Dept. of Computer Science, Harvard University, March 1999.
Interpolation among a sparse set of precomputed object silhouettes.
Recent image-based rendering techniques have shown success in approximating detailed models using sampled
images over coarser meshes. One limitation of these techniques is that the coarseness of the geometric
mesh is apparent in the rough polygonal silhouette of the rendering. In this paper, we present a scheme
for accurately capturing the external silhouette of a model in order to clip the approximate geometry.
Given a detailed model, silhouettes sampled from a discrete set of viewpoints about the object are collected into a silhouette map. The silhouette from an arbitrary viewpoint is then computed as the interpolation from three nearby viewpoints in the silhouette map. Pairwise silhouette interpolation is based on a visual hull approximation in the epipolar plane. The silhouette map itself is adaptively simplified by removing views whose silhouettes are accurately predicted by interpolation of their neighbors. The model geometry is approximated by a progressive hull construction, and is rendered using projected texture maps. The 3D rendering is clipped to the interpolated silhouette using stencil planes.
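The core lookup step described above — choosing three nearby sampled viewpoints and blending their silhouettes — can be sketched as follows. This is a minimal illustration, not the paper's implementation: `nearest_three_views` and `blend_weights` are hypothetical helper names, viewpoints are assumed to be unit viewing directions on a sphere, and the actual paper performs pairwise silhouette interpolation via a visual-hull approximation in the epipolar plane, which the simple weight computation below does not model.

```python
import numpy as np

def nearest_three_views(view_dirs, query_dir):
    """Pick the three sampled viewing directions with the smallest
    angle to the query direction (largest dot product)."""
    d = np.asarray(view_dirs, dtype=float)
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    q = np.asarray(query_dir, dtype=float)
    q = q / np.linalg.norm(q)
    return np.argsort(-(d @ q))[:3]

def blend_weights(tri_dirs, query_dir):
    """Nonnegative weights expressing the query direction as a
    combination of the three chosen directions (least-squares proxy
    for barycentric coordinates on the viewing sphere)."""
    A = np.asarray(tri_dirs, dtype=float).T          # columns = view dirs
    w, *_ = np.linalg.lstsq(A, np.asarray(query_dir, float), rcond=None)
    w = np.clip(w, 0.0, None)
    return w / w.sum()

# Example: sampled viewpoints at the unit axes; query along +x
views = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [-1, 0, 0]])
idx = nearest_three_views(views, [1, 0, 0])
w = blend_weights(views[idx], [1, 0, 0])
```

When the query coincides with a sampled viewpoint, that view receives all the weight, so the interpolated silhouette degenerates to the stored one, as one would expect.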
Hindsight: Our more recent Silhouette Clipping (2000) paper extracts silhouettes from a high-resolution mesh instead of interpolating from a set of precomputed silhouettes. Still, we hope that the silhouette map may prove useful for rendering scanned silhouettes directly, without having to construct explicit mesh geometry. Another related work is Image-based visual hulls.