Abstract: We exploit the falloff of acuity in the visual periphery to accelerate
graphics computation by a factor of 5-6 on a desktop HD display
(1920x1080). Our method tracks the user's gaze point and renders
three image layers around it at progressively higher angular size but
lower sampling rate. The three layers are then magnified to display
resolution and smoothly composited. We develop a general and efficient antialiasing algorithm easily retrofitted into existing graphics
code to minimize "twinkling" artifacts in the lower-resolution layers.
A standard psychophysical model for acuity falloff assumes
that minimum detectable angular size increases linearly as a function
of eccentricity. Given the slope characterizing this falloff, we
automatically compute layer sizes and sampling rates. The result
looks like a full-resolution image but reduces the number of pixels
shaded by a factor of 10-15.
We performed a user study to validate these results. It identifies
two levels of foveation quality: a more conservative one in which
users reported foveated rendering quality as equivalent to or better
than non-foveated when directly shown both, and a more aggressive
one in which users were unable to correctly label as increasing
or decreasing a short quality progression relative to a high-quality
foveated reference. Based on this user study, we obtain a slope
value for the model of 1.32-1.65 arc minutes per degree of eccentricity.
This allows us to predict two future advantages of foveated
rendering: (1) bigger savings with larger, sharper displays than exist
currently (e.g. 100 times speedup at a field of view of 70° and
resolution matching foveal acuity), and (2) a roughly linear (rather
than quadratic or worse) increase in rendering cost with increasing
display field of view, for planar displays at a constant sharpness.
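
For concreteness, here is a small Python sketch of how the linear acuity model can drive per-layer sampling rates. The display parameters, per-layer eccentricities, and the Nyquist-style conversion from minimum angular resolution (MAR) to pixel density are illustrative assumptions, not the paper's exact procedure.

import numpy as np

# Sketch: size foveated layers from the linear model MAR(e) = w0 + s*e,
# with eccentricity e in degrees, slope s in arcmin/deg (the user study
# supports ~1.32-1.65), and foveal MAR w0 in arcmin.

def pixels_per_degree(width_px, fov_deg):
    # crude average angular resolution of a planar display
    return width_px / fov_deg

def layer_scale(ecc_deg, slope, w0_arcmin, display_ppd):
    # fraction of full display resolution needed to match acuity
    # at eccentricity ecc_deg (clamped at full resolution)
    mar_deg = (w0_arcmin + slope * ecc_deg) / 60.0  # arcmin -> degrees
    needed_ppd = 2.0 / mar_deg                      # ~2 samples per cycle
    return min(1.0, needed_ppd / display_ppd)

ppd = pixels_per_degree(1920, 47.0)  # assumed horizontal FOV of 47 degrees
for ecc in (0.0, 8.0, 20.0):         # assumed inner eccentricity per layer
    print(ecc, layer_scale(ecc, slope=1.5, w0_arcmin=1.0, display_ppd=ppd))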

Keywords: forward and inverse kinematics, MCAD, mechanism
synthesis, simulated annealing.

Abstract: We introduce a new method to synthesize mechanical toys solely
from the motion of their features. The designer specifies the geometry
and a time-varying rotation and translation of each rigid feature
component. Our algorithm automatically generates a mechanism
assembly located in a box below the feature base that produces the
specified motion. Parts in the assembly are selected from a parameterized
set including belt-pulleys, gears, crank-sliders, quick-returns,
and various cams (snail, ellipse, and double-ellipse). Positions
and parameters for these parts are optimized to generate the
specified motion, minimize a simple measure of complexity, and
yield a well-distributed layout of parts over the driving axes. Our
solution uses a special initialization procedure followed by simulated
annealing to efficiently search the complex configuration space
for an optimal assembly.

Abstract: Recent work defines vector graphics using diffusion between colored
curves. We explore higher-order fairing to enable more natural
interpolation and greater expressive control. Specifically, we
build on thin-plate splines which provide smoothness everywhere
except at user-specified tears and creases (discontinuities in value
and derivative respectively). Our system lets a user sketch discontinuity
curves without fixing their colors, and sprinkle color constraints
at sparse interior points to obtain smooth interpolation subject
to the outlines. We refine the representation with novel contour
and slope curves, which anisotropically constrain interpolation
derivatives. Compound curves further increase editing power by expanding
a single curve into multiple offsets of various basic types
(value, tear, crease, slope, and contour). The vector constraints are
discretized over an image grid, and satisfied using a hierarchical
solver. We demonstrate interactive authoring on a desktop CPU.
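
As a structural illustration only: the paper discretizes thin-plate (biharmonic) constraints on an image grid and solves hierarchically, whereas the Python sketch below substitutes the simpler Laplace system, relaxed by Jacobi iteration, to show the discretize-and-solve pattern for sparse color constraints. Names and the wraparound boundary handling are toy assumptions.

import numpy as np

def diffuse_colors(shape, constraints, iters=2000):
    # constraints: {(y, x): color}; everything else diffuses smoothly
    img = np.zeros(shape)
    mask = np.zeros(shape, dtype=bool)
    for (y, x), c in constraints.items():
        img[y, x], mask[y, x] = c, True
    for _ in range(iters):
        avg = 0.25 * (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
                      np.roll(img, 1, 1) + np.roll(img, -1, 1))
        img = np.where(mask, img, avg)  # constrained pixels stay fixed
    return img

print(diffuse_colors((32, 32), {(4, 4): 0.0, (28, 28): 1.0})[16, 16])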

Keywords: BRDF chart, dynamic time warping (DTW), local linear embedding, reflectance sequence/response, spatially varying BRDF (SVBRDF).

Abstract: We present a simple, fast solution for reflectance acquisition using
tools that fit into a pocket. Our method captures video of a flat target
surface from a fixed video camera lit by a hand-held, moving, linear
light source. After processing, we obtain an SVBRDF.
We introduce a BRDF chart, analogous to a color "checker" chart,
which arranges a set of known-BRDF reference tiles over a small
card. A sequence of light responses from the chart tiles as well as
from points on the target is captured and matched to reconstruct the
target's appearance.
We develop a new algorithm for BRDF reconstruction which works
directly on these LDR responses, without knowing the light or camera
position, or acquiring HDR lighting. It compensates for spatial
variation caused by the local (finite distance) camera and light position
by warping responses over time to align them to a specular
reference. After alignment, we find an optimal linear combination
of the Lambertian and purely specular reference responses to match
each target point's response. The same weights are then applied to
the corresponding (known) reference BRDFs to reconstruct the target
point's BRDF. We extend the basic algorithm to also recover
varying surface normals by adding two spherical caps for diffuse
and specular references to the BRDF chart.
We demonstrate convincing results obtained after less than 30 seconds
of data capture, using commercial mobile phone cameras in a
casual environment.
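
The final matching step admits a compact sketch. Under simplifying assumptions (responses already temporally aligned and stacked as vectors over video frames; function and variable names are hypothetical), a least-squares fit yields the two weights that are then applied to the known reference BRDFs.

import numpy as np

def match_response(r_lambert, r_specular, r_target):
    # least-squares weights (a, b) with r_target ~ a*r_lambert + b*r_specular
    A = np.column_stack([r_lambert, r_specular])
    w, *_ = np.linalg.lstsq(A, r_target, rcond=None)
    return np.clip(w, 0.0, None)  # clamp to physically plausible weights

# The same (a, b) then blend the two known reference BRDFs to
# reconstruct the target point's BRDF.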

Keywords: BRDF, local linear embedding (LLE), microfacet model, normal distribution function (NDF), reflectance acquisition.

Abstract: Manifold bootstrapping is a new method for data-driven modeling
of real-world, spatially-varying reflectance, based on the idea that
reflectance over a given material sample forms a low-dimensional
manifold. It provides a high-resolution result in both the spatial
and angular domains by decomposing reflectance measurement into
two lower-dimensional phases. The first acquires representatives
of high angular dimension but sampled sparsely over the surface,
while the second acquires keys of low angular dimension but sampled
densely over the surface.
We develop a hand-held, high-speed BRDF capturing device for
phase one measurements. A condenser-based optical setup collects
a dense hemisphere of rays emanating from a single point on the
target sample as the device is manually scanned over the surface, yielding 10 BRDF
point measurements per second. Lighting directions from 6 LEDs
are applied at each measurement; these are amplified to a full 4D
BRDF using the general (NDF-tabulated) microfacet model. The
second phase captures N=20-200 images of the entire sample from
a fixed view and lit by a varying area source. We show that the resulting
N-dimensional keys capture much of the distance information
in the original BRDF space, so that they effectively discriminate
among representatives, though they lack sufficient angular detail
to reconstruct the SVBRDF by themselves. At each surface
position, a local linear combination of a small number of neighboring
representatives is computed to match each key, yielding a high-resolution
SVBRDF. A quick capture session (10-20 minutes) on
simple devices yields results showing sharp and anisotropic specularity
and rich spatial detail.
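
A sketch of the phase-two reconstruction in Python, with assumed data layouts (rep_keys as an R x N array of representative keys, rep_brdfs as an R x M array of their full BRDFs); the neighborhood size and the nonnegativity/normalization details are illustrative, not the paper's exact fit.

import numpy as np

def reconstruct_point(key, rep_keys, rep_brdfs, k=4):
    d = np.linalg.norm(rep_keys - key, axis=1)
    nn = np.argsort(d)[:k]                       # k nearest representatives
    A = rep_keys[nn].T                           # (N, k)
    w, *_ = np.linalg.lstsq(A, key, rcond=None)  # fit the key locally
    w = np.clip(w, 0.0, None)
    w = w / (w.sum() + 1e-12)                    # normalize the blend
    return w @ rep_brdfs[nn]                     # same weights on full BRDFs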

Keywords: acoustic propagation, convolution, early reflections, impulse response, late reverberation, peak detection.

Abstract: We present a method for real-time sound propagation that captures
all wave effects, including diffraction and reverberation, for multiple
moving sources and a moving listener in a complex, static 3D
scene. It performs an offline numerical simulation over the scene
and then applies a novel technique to extract and compactly encode
the perceptually salient information in the resulting acoustic
responses. Each response is automatically broken into two phases:
early reflections (ER) and late reverberation (LR), via a threshold
on the temporal density of arriving wavefronts. The LR is
simulated and stored in the frequency domain, once per room in
the scene. The ER accounts for more detailed spatial variation,
by recording a set of peak delays/amplitudes in the time domain
and a residual frequency response sampled in octave frequency
bands, at each source/receiver point pair in a 5D grid. An efficient
run-time uses this precomputed representation to perform binaural
sound rendering based on frequency-domain convolution. Our
system demonstrates realistic, wave-based acoustic effects in real
time, including diffraction low-passing behind obstructions, sound
focusing, hollow reverberation in empty rooms, sound diffusion in
fully-furnished rooms, and realistic late reverberation.
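
The ER/LR split can be sketched as follows, assuming peak arrival times have already been extracted from a response; the window length and density threshold here are placeholders, not the values used by the encoder.

import numpy as np

def split_er_lr(peak_times_ms, window_ms=10.0, max_peaks_per_window=20):
    # return the earliest time at which arriving-wavefront density
    # exceeds the threshold; before it is ER, after it is LR
    t = np.asarray(peak_times_ms)
    for start in t:
        count = np.count_nonzero((t >= start) & (t < start + window_ms))
        if count > max_peaks_per_window:
            return start
    return t[-1] if len(t) else 0.0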

Keywords: environmental lighting, microfacet model, normal
distribution function (NDF), shadows, spatially-varying bi-directional
reflectance distribution function (SVBRDF), spherical Gaussian,
spherical signed distance function (SSDF), texture map.

Abstract: We describe a technique for real-time rendering of dynamic,
spatially-varying BRDFs in static scenes with all-frequency shadows
from environmental and point lights. The 6D SVBRDF is represented
with a general microfacet model and spherical lobes fit to
its 4D spatially-varying normal distribution function (SVNDF). A
sum of spherical Gaussians (SGs) provides an accurate approximation
with a small number of lobes. Parametric BRDFs are fit on-the-fly
using simple analytic expressions; measured BRDFs are fit
as a preprocess using nonlinear optimization. Our BRDF representation
is compact, allows detailed textures, is closed under products
and rotations, and supports reflectance of arbitrarily high specularity.
At run-time, SGs representing the NDF are warped to align the
half-angle vector to the lighting direction and multiplied by the microfacet
shadowing and Fresnel factors. This yields the relevant 2D
view slice on-the-fly at each pixel, still represented in the SG basis.
We account for macro-scale shadowing using a new, nonlinear visibility
representation based on spherical signed distance functions
(SSDFs). SSDFs allow per-pixel interpolation of high-frequency
visibility without ghosting and can be multiplied by the BRDF and
lighting efficiently on the GPU.

Abstract: We present a real-time method for rendering global illumination effects from large area and environmental lights
on dynamic height fields. In contrast to previous work, our method handles inter-reflections (indirect lighting)
and non-diffuse surfaces. To reduce sampling, we construct one multi-resolution pyramid for height variation
to compute direct shadows, and another pyramid for each indirect bounce of incident radiance to compute inter-reflections.
The basic principle is to sample the points blocking direct light, or shedding indirect light, from coarser
levels of the pyramid the farther away they are from a given receiver point. We unify the representation of visibility
and indirect radiance at discrete azimuthal directions (i.e., as a function of a single elevation angle) using the
concept of a "casting set" of visible points along this direction whose contributions are collected in the basis
of normalized Legendre polynomials. This analytic representation is compact, requires no precomputation, and
allows efficient integration to produce the spherical visibility and indirect radiance signals. Sub-sampling visibility
and indirect radiance, while shading with full-resolution surface normals, further increases performance without
introducing noticeable artifacts. Our method renders 512x512 height fields (>500K triangles) at 36Hz.
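
For readers unfamiliar with the basis, here is a brief Python sketch of projecting a 1D signal (e.g., visibility as a function of elevation along one azimuthal direction) onto normalized Legendre polynomials; the mapping of elevation onto the canonical domain [-1, 1] is an assumption of this sketch.

import numpy as np
from numpy.polynomial import legendre

def legendre_project(f, order=4, samples=4096):
    x = np.linspace(-1.0, 1.0, samples)
    dx = x[1] - x[0]
    fx = f(x)
    coeffs = []
    for n in range(order):
        c = np.zeros(n + 1); c[-1] = 1.0
        Pn = legendre.legval(x, c)          # Legendre polynomial P_n
        norm = np.sqrt((2 * n + 1) / 2.0)   # orthonormal on [-1, 1]
        coeffs.append(np.sum(fx * Pn * norm) * dx)
    return np.array(coeffs)

# Example: project a step-like visibility function of elevation.
print(legendre_project(lambda x: (x > 0.2).astype(float)))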

Keywords: BRDF acquisition, normal
distribution function (NDF), spatially-varying bi-directional
reflectance distribution function (SVBRDF).

Abstract: We present a new technique for the visual modeling of spatially-varying
anisotropic reflectance using data captured from a single
view. Reflectance is represented using a microfacet-based BRDF
which tabulates the facets’ normal distribution (NDF) as a function
of surface location. Data from a single view provides a 2D slice
of the 4D BRDF at each surface point from which we fit a partial
NDF. The fitted NDF is partial because the single view direction
coupled with the set of light directions covers only a portion of the
“half-angle” hemisphere. We complete the NDF at each point by
applying a novel variant of texture synthesis using similar, overlapping
partial NDFs from other points. Our similarity measure allows
azimuthal rotation of partial NDFs, under the assumption that
reflectance is spatially redundant but the local frame may be arbitrarily
oriented. Our system includes a simple acquisition device
that collects images over a 2D set of light directions by scanning a
linear array of LEDs over a flat sample. Results demonstrate that
our approach preserves spatial and directional BRDF details and
generates a visually compelling match to measured materials.

Abstract: We present a new, real-time method for rendering soft shadows
from large light sources or lighting environments on dynamic height fields.
The method first computes a horizon map for a set of azimuthal directions.
To reduce sampling, we compute a multi-resolution pyramid
on the height field.
Coarser pyramid levels are indexed
as the distance from caster to receiver increases.
For every receiver point and every azimuthal direction,
a smooth function of blocking angle in terms of log distance is reconstructed
from a height difference sample at each pyramid level.
This function's maximum approximates the horizon angle.
We then sum visibility at each receiver point
over wedges determined by successive pairs of horizon angles.
Each wedge represents a linear transition in blocking angle
over its azimuthal extent.
It is precomputed in the order-4 spherical harmonic (SH) basis,
for a canonical azimuthal origin and fixed extent,
resulting in a 2D table.
The SH triple product of 16D vectors representing lighting, total visibility, and diffuse reflectance
then yields the soft-shadowed result.
Two types of light sources are considered; both are distant and low-frequency.
Environmental lights require visibility
sampling around the complete 360 degree azimuth, while key lights
sample visibility within a partial swath.
Restricting the swath concentrates samples where the light comes from
(e.g. 3 azimuthal directions vs. 16-32 for the full azimuth)
and yields sharper shadows.
Our GPU implementation handles
height fields up to 1024x1024 in real-time.
The computation is simple, local, and parallel,
with performance independent of geometric content.
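
A simplified 1D sketch of the per-direction horizon query: the paper reconstructs a smooth function of blocking angle versus log distance and takes its maximum, whereas this Python sketch just takes the pointwise maximum over one height-difference sample per pyramid level (step distances growing by powers of two are an assumption).

import numpy as np

def horizon_angle(heights_at_levels, receiver_height, base_step=1.0):
    # heights_at_levels[l]: height sampled at distance base_step * 2**l
    best = -np.pi / 2
    for level, h in enumerate(heights_at_levels):
        dist = base_step * (2.0 ** level)
        best = max(best, np.arctan2(h - receiver_height, dist))
    return best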

Abstract: We present a new, general, and real-time technique
for soft global illumination in low-frequency environmental
lighting. It accumulates over relatively few spherical proxies
which approximate the light blocking and re-radiating
effect of dynamic geometry. Soft shadows are computed by
accumulating log visibility vectors for each sphere proxy as
seen by each receiver point. Inter-reflections are computed
by accumulating vectors representing the proxy’s unshadowed
radiance when illuminated by the environment. Both
vectors capture low-frequency directional dependence using
the spherical harmonic basis. We also present a new
proxy accumulation strategy that splats each proxy to receiver
pixels in image space to collect its shadowing and indirect
lighting contribution. Our soft GI rendering pipeline
unifies direct and indirect soft effects with a simple accumulation
strategy that maps entirely to the GPU and outperforms
previous vertex-based methods.

Keywords: airlight, fog, Gaussian, participating media, radial
basis function (RBF).

Abstract: We describe a new, analytic approximation to the airlight integral
from scattering media whose density is modeled as a sum of Gaussians.
The approximation supports real-time rendering of inhomogeneous
media including their shadowing and scattering effects.
For each Gaussian, this approximation samples the scattering integrand
at the projection of its center along the view ray but models
attenuation and shadowing with respect to the other Gaussians by
integrating density along the fixed path from light source to 3D center
to view point. Our approach handles isotropic, single-scattering
media illuminated by point light sources or low-frequency environmental
lighting. We also generalize models for reflectance of surfaces
from constant-density to inhomogeneous media, using simple
optical depth averaging in the direction of the light source or
all around the receiver point. Our real-time rendering approach is
incorporated into a system for real-time design and preview of realistic,
animated fog, steam, or smoke.

Note: see more recent work in Pacific Graphics 2008:
"Gradient-based Interpolation and Sampling for Real-Time Rendering of Inhomogeneous, Single-Scattering Media", by my colleagues
Zhong Ren, Kun Zhou, Stephen Lin, and Baining Guo. This work is also able to simulate light shafts.

Abstract: We approximate a solid object represented as a triangle mesh by a
bounding set of spheres having minimal summed volume outside the object. We show
how outside volume for a single sphere can be computed using a simple integration over the object’s triangles. We
then minimize the total outside volume over all spheres
in the set using a variant of iterative Lloyd clustering
which splits the mesh points into sets and bounds each
with an outside volume-minimizing sphere. The resulting
sphere sets are tighter than those of previous methods. In
experiments comparing against a state-of-the-art alternative
(adaptive medial axis), our method often requires half as many
spheres, or fewer, to obtain the same error,
under a variety of error metrics including total outside
volume, shadowing fidelity, and proximity measurement.
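
A heavily simplified Python sketch of the clustering loop. The refit step here uses a crude centroid/max-radius bound in place of the paper's outside-volume-minimizing sphere fit (which requires the triangle integration described above); the structure of the split/fit/reassign iteration is the point.

import numpy as np

def sphere_set(points, k=8, iters=20, seed=0):
    pts = np.asarray(points, dtype=float)
    rng = np.random.default_rng(seed)
    centers = pts[rng.choice(len(pts), k, replace=False)].copy()
    radii = np.zeros(k)
    for _ in range(iters):
        # split: assign each point to the sphere with the nearest center
        d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):  # refit each sphere to its cluster
            cluster = pts[labels == j]
            if len(cluster):
                centers[j] = cluster.mean(axis=0)
                radii[j] = np.linalg.norm(cluster - centers[j], axis=1).max()
    return centers, radii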

Keywords: ambient occlusion, low-frequency lighting environment,
global illumination, hybrid algorithm (HYB), matrix exponential,
optimal linear approximation (OL), product series, scaling/squaring,
self-shadowing, SHEXP, SH logarithm, SH visibility vector,
Volterra series.

Abstract: Previous methods for soft shadows numerically integrate over many
light directions at each receiver point, testing blocker visibility in
each direction. We introduce a method for real-time soft shadows
in dynamic scenes illuminated by large, low-frequency light
sources where such integration is impractical. Our method operates
on vectors representing low-frequency visibility of blockers in the
spherical harmonic basis. Blocking geometry is modeled as a set of
spheres; relatively few spheres capture the low-frequency blocking
effect of complicated geometry. At each receiver point, we compute
the product of visibility vectors for these blocker spheres as
seen from the point. Instead of computing an expensive SH product
per blocker as in previous work, we perform inexpensive vector
sums to accumulate the log of blocker visibility. SH exponentiation
then yields the product visibility vector over all blockers. We show
how the SH exponentiation required can be approximated accurately
and efficiently for low-order SH, accelerating previous CPU-based
methods by a factor of 10 or more, depending on blocker
complexity, and allowing real-time GPU implementation.
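
One way to realize this pipeline (a sketch, not the paper's fastest approximation): accumulate the log visibility vectors with plain sums, then exponentiate once by scaling/squaring, as in the Python sketch below. The SH product routine, governed by the order-3 triple-product tensor, is assumed supplied; it is exactly the per-blocker cost the method avoids.

import numpy as np

def sh_exp(f, sh_product, squarings=4, terms=6):
    # exp(f) = (exp(f / 2^s))^(2^s): Taylor series for the scaled-down
    # argument, then repeated squaring via the supplied SH product
    g = f / (2.0 ** squarings)
    result = np.zeros_like(g)
    result[0] = np.sqrt(4.0 * np.pi)        # SH coefficients of the constant 1
    term = result.copy()
    for n in range(1, terms):
        term = sh_product(term, g) / n      # builds g^n / n!
        result = result + term
    for _ in range(squarings):
        result = sh_product(result, result)
    return result

def product_visibility(log_vis_vectors, sh_product):
    total = np.sum(log_vis_vectors, axis=0) # cheap per-blocker vector sums
    return sh_exp(total, sh_product)        # one exponentiation at the end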

Abstract: We present a method for fast evaluation of spherical harmonic (SH)
products or, more generally, any binary product of vectors
yielding a vector, where the product is governed by a fixed,
sparse, symmetric, order 3 tensor, denoted
G_{ijk}. The method is given the nonzero entries of
G as input (they can be
computed analytically or by numerical integration for the SH
basis) and makes use of an offline code generator to perform the
necessary array indexing using constants rather than variables. Factoring is performed by
collecting the tensor’s nonzero components, represented by index
triples (i,j,k),
into groups (i,j,k_{1}), (i,j,k_{2}),
…, (i,j,k_{N_{ij}}) which
share a common pair of indices (i,j)
in the triple, and which vary only in the third (completion)
index k_{m}
and its corresponding coefficient d_{m}=G_{ijkm}
where m ∈ {1,2,...,N_{ij}}. The collection is done
using a greedy method that successively chooses the index pair (i,j)
maximizing the number N_{ij}
of different k_{m}
needed to complete the tensor’s nonzero index triples. The greedy method then continues to the next best initial
pair, generates its contribution, and so on, until all nonzero
triples have been accounted for. The combination of “greedy pair” factoring and generating
constant array indices produces code that is significantly
faster than naïve evaluation methods.
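
The flavor of the generated code can be sketched in a few lines of Python. The triples below are illustrative placeholders rather than real SH coefficients, and the sketch treats the triple list as already expanded (the real generator also exploits the tensor's symmetry).

from collections import defaultdict

def generate(triples):  # triples: {(i, j, k): G_ijk}; assumes c[] starts at zero
    lines, remaining = [], dict(triples)
    while remaining:
        groups = defaultdict(list)
        for (i, j, k), g in remaining.items():
            groups[(i, j)].append((k, g))
        # greedily pick the pair (i, j) with the most completions k_m
        (i, j), comps = max(groups.items(), key=lambda kv: len(kv[1]))
        lines.append(f"t = a[{i}] * b[{j}];")    # shared initial pair
        for k, g in comps:
            lines.append(f"c[{k}] += {g} * t;")  # constant-indexed completions
            del remaining[(i, j, k)]
    return "\n".join(lines)

print(generate({(0, 0, 0): 0.282, (0, 1, 1): 0.282, (1, 1, 0): 0.282}))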

Keywords: bidirectional texture functions (BTFs),
meso-structure distance function (MDF), reflectance
and shading models, rendering, shadow algorithms, texture mapping.

Abstract: Bidirectional texture functions, or BTFs, accurately model
reflectance variation at a fine (meso-) scale as a function of lighting and
viewing direction. BTFs also capture view-dependent visibility variation, also
called masking or parallax, but only within surface contours. Meso-structure
detail is neglected at silhouettes, so BTF-mapped objects retain the coarse
shape of the underlying model.
We augment BTF rendering to obtain approximate meso-scale silhouettes. Our new representation, the 4D meso-structure distance
function (MDF), tabulates the displacement from a reference plane at which a ray
first intersects the meso-scale geometry beneath, as a function of ray direction
and ray position on that plane. Given an MDF, the meso-structure
silhouette can be rendered with a per-pixel depth peeling process on graphics
hardware, while shading and local parallax are handled by the BTF. Our approach
allows real-time rendering, handles complex, non-height-field meso-structure,
requires that no additional geometry be sent to the rasterizer other than the
mesh triangles, is more compact than textured visibility representations used
previously, and, for the first time, can be easily measured from physical
samples. We also adapt the algorithm to capture detailed shadows cast both by
and onto BTF-mapped surfaces. We demonstrate the efficiency of our algorithm on
a variety of BTF data, including real data acquired using our BTF–MDF
measurement system.

Abstract:
Precomputed radiance transfer (PRT) captures realistic lighting
effects from distant, low-frequency environmental lighting but has been limited
to static models or precomputed sequences. We focus on PRT for local effects
such as bumps, wrinkles, or other detailed features, but extend it to
arbitrarily deformable models. Our approach applies zonal harmonics (ZH) which
approximate spherical functions as sums of circularly symmetric Legendre
polynomials around different axes. By spatially varying both the axes and
coefficients of these basis functions, we can fit to spatially varying transfer
signals. Compared to the spherical harmonic (SH) basis, the ZH basis yields a
more compact approximation. More important, it can be trivially rotated whereas
SH rotation is expensive and unsuited for dense per-vertex or per-pixel
evaluation. This property allows, for the first time, PRT to be mapped onto
deforming models which re-orient the local coordinate frame. We generate ZH
transfer models by fitting to PRT signals simulated on meshes or simple
parametric models for thin membranes and wrinkles. We show how shading with ZH
transfer can be significantly accelerated by specializing to a given lighting
environment. Finally, we demonstrate real-time rendering results with soft
shadows, inter-reflections, and subsurface scatter on deforming models.

Note: in hindsight, I wished we had called this "reorientable" PRT rather than "deformable",
since the essence of our representation is to allow fast rotation of the local radiance transfer.

Abstract:
We present a novel technique for large deformations on 3D meshes
using the volumetric graph Laplacian. We first construct a graph representing
the volume inside the input mesh. The graph need not form a solid meshing of the
input mesh’s interior; its edges simply connect nearby points in the volume.
This graph’s Laplacian encodes volumetric details as the difference between each
point in the graph and the average of its neighbors. Preserving these volumetric
details during deformation imposes a volumetric constraint that prevents
unnatural changes in volume. We also include in the graph points a short
distance outside the mesh to avoid local self-intersections. Volumetric detail
preservation is represented by a quadric energy function. Minimizing it
preserves details in a least-squares sense, distributing error uniformly over
the whole deformed mesh. It can also be combined with conventional constraints
involving surface positions, details or smoothness, and efficiently minimized by
solving a sparse linear system.
We apply this technique in a 2D curve-based deformation system allowing novice
users to create pleasing deformations with little effort. A novel application of
this system is to apply nonrigid and exaggerated deformations of 2D cartoon
characters to 3D meshes. We demonstrate our system’s potential with several
examples.
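
The quadric energy and its sparse least-squares solve can be sketched compactly. The sketch below (Python with SciPy) keeps only the volumetric-detail and handle-position terms; the surface terms, the exterior graph points, and the curve-based interface are omitted, and all shapes and weights are illustrative.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def deform(points, neighbors, handles, targets, w=10.0):
    # points: (n, 3); neighbors[i]: indices adjacent to graph point i;
    # handles: constrained point indices; targets: (len(handles), 3)
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    rows, cols, vals = [], [], []
    for i, nbrs in enumerate(neighbors):
        rows.append(i); cols.append(i); vals.append(1.0)
        for j in nbrs:
            rows.append(i); cols.append(j); vals.append(-1.0 / len(nbrs))
    L = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))
    delta = L @ pts                          # volumetric details to preserve
    C = sp.csr_matrix((np.full(len(handles), w),
                       (np.arange(len(handles)), handles)),
                      shape=(len(handles), n))
    A = sp.vstack([L, C]).tocsr()
    b = np.vstack([delta, w * np.asarray(targets, dtype=float)])
    # minimize ||L x - delta||^2 + w^2 ||x[handles] - targets||^2
    return np.column_stack([spla.lsqr(A, b[:, d])[0]
                            for d in range(b.shape[1])])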

Abstract:
We describe a fully automatic method, called iso-charts, to
create texture atlases on arbitrary meshes. It is the first to consider stretch
not only when parameterizing charts, but also when forming charts. The output
atlas bounds stretch by a user-specified constant, allowing the user to balance
the number of charts against their stretch. Our approach combines two seemingly
incompatible techniques: stretch-minimizing parameterization, based on the
surface integral of the trace of the local metric tensor, and the “isomap” or
MDS (multi-dimensional scaling) parameterization, based on an eigen-analysis of
the matrix of squared geodesic distances between pairs of mesh vertices. We show
that only a few iterations of nonlinear stretch optimization need be applied to
the MDS parameterization to obtain low-stretch atlases. The close relationship
we discover between these two parameterizations also allows us to apply spectral
clustering based on MDS to partition the mesh into charts having low stretch. We
also make novel use of the graph cut algorithm in optimizing chart boundaries to
further minimize stretch, follow sharp features, and avoid meandering. Overall,
our algorithm creates texture atlases quickly, with fewer charts and lower
stretch than previous methods, providing improvement in applications like
geometric remeshing. We also describe an extension, signal-specialized atlas
creation, to efficiently sample surface signals, and show for the first time
that considering signal stretch in chart formation produces better texture maps.

Abstract:
We propose a metric for surface parameterization specialized to
its signal that can be used to create more efficient, high-quality texture maps.
Derived from Taylor expansion of signal error, our metric predicts the signal
approximation error — the difference between the original surface signal and its
reconstruction from the sampled texture. Unlike previous methods, our metric
assumes piecewise-linear reconstruction, and thus makes a good approximation to
bilinear reconstruction employed in graphics hardware. We achieve significant
savings in texture area for a desired signal accuracy compared to the
signal-specialized parameterization metric proposed by Sander et al. in the 2002
Eurographics Workshop on Rendering.

Keywords: BRDF factorization, clustered principal component analysis
(CPCA), global illumination, Haar wavelet lighting basis over cube
maps, PRT, transfer matrix.

Abstract:
We introduce a method based on precomputed radiance transfer
(PRT) that allows interactive rendering of glossy surfaces and includes
shadowing effects from dynamic, “all-frequency” lighting. Specifically, source
lighting is represented by a cube map at resolution nL = 6×32×32. We present a
novel PRT formulation which factors glossy BRDFs into purely view-dependent and
light-dependent parts, achieving reasonable accuracy with only m=10 dimensional
factors. We then tabulate an m×nL transfer matrix at each surface vertex as a
preprocess, representing the object’s response to this lighting. Because this
surface signal is so high-dimensional, reducing m is crucial for making
practical both the preprocessing and run-time. To compress the transfer
matrices, we divide the cube map into 24 lighting segments and apply the Haar
wavelet basis in each segment to provide sensible quantization. We also apply
clustered principal component analysis (CPCA) to each PRT segment to approximate
it as a linear combination of a few (n=16) representative transfer matrices
within a small set of clusters over the surface. This exploits spatial coherence
to compress very effectively. Most important, it maintains fast rendering rates
with 2-3 orders of magnitude more lighting coefficients than previous methods,
which increases accuracy and avoids temporal artifacts in high-frequency
lighting environments. We demonstrate interactive performance (1-5Hz) on models
having up to 50,000 vertices.

Keywords: clustered
principal component analysis (CPCA), graphics hardware, low-frequency environmental lighting,
global illumination, Monte Carlo techniques, principal component
analysis (PCA), PRT, shadows, vector quantization (VQ).

Abstract:
We compress storage and accelerate performance of precomputed
radiance transfer (PRT), which captures the way an object shadows, scatters, and
reflects light. PRT records over many surface points a transfer matrix. At
run-time, this matrix transforms a vector of spherical harmonic coefficients
representing distant, low-frequency source lighting into exiting radiance.
Per-point transfer matrices form a high-dimensional surface signal that we
compress using clustered principal component analysis (CPCA), which partitions
many samples into fewer clusters each approximating the signal as an affine
subspace. CPCA thus reduces the high-dimensional transfer signal to a
low-dimensional set of per-point weights on a per-cluster set of representative
matrices. Rather than computing a weighted sum of representatives and applying
this result to the lighting, we apply the representatives to the lighting
per-cluster (on the CPU) and weight these results per-point (on the GPU). Since
the output of the matrix is lower-dimensional than the matrix itself, this
reduces computation. We also increase the accuracy of encoded radiance functions
with a new least-squares optimal projection of spherical harmonics onto the
hemisphere. We describe an implementation on graphics hardware that performs
real-time rendering of glossy objects with dynamic self-shadowing and
inter-reflection without fixing the view or light as in previous work. Our
approach also allows significantly increased lighting frequency when rendering
diffuse objects and includes subsurface scattering.
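
A sketch of the run-time split in Python, with assumed shapes: cluster_reps is (clusters, n, r, l), point_weights is (points, n), and light is an l-vector of spherical harmonic coefficients. The saving comes from applying the r x l matrices once per cluster rather than once per point.

import numpy as np

def shade(light, cluster_reps, point_cluster, point_weights):
    # per-cluster (CPU in the paper): n small matrix-vector products each
    cluster_out = np.einsum('cnrl,l->cnr', cluster_reps, light)
    # per-point (GPU in the paper): blend the short result vectors
    return np.einsum('pn,pnr->pr', point_weights, cluster_out[point_cluster])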

Keywords: bidirectional texture function (BTF), graphics hardware,
global illumination, id map, meso-scale and macro-scale, multi-resolution
modeling, low-frequency environmental lighting, precomputed
radiance transfer (PRT), radiance transfer
texture (RTT), shadows, texture synthesis, transfer matrix.

Abstract:
Radiance transfer represents how generic source lighting is
shadowed and scattered by an object to produce view-dependent appearance. We
generalize by rendering transfer at two scales. A macro-scale is coarsely
sampled over an object’s surface, providing global effects like shadows cast
from an arm onto a body. A meso-scale is finely sampled over a small patch to
provide local texture. Low-order (25D) spherical harmonics represent
low-frequency lighting dependence for both scales. To render, a coefficient
vector representing distant source lighting is first transformed at the
macro-scale by a matrix at each vertex of a coarse mesh. The resulting vectors
represent a spatially-varying hemisphere of lighting incident to the meso-scale.
A 4D function, called a radiance transfer texture (RTT), then specifies the
surface’s meso-scale response to each lighting basis component, as a function of
a spatial index and a view direction. Finally, a 25D dot product of the
macro-scale result vector with the vector looked up from the RTT performs the
correct shading integral. We use an id map to place RTT samples from a small
patch over the entire object; only two scalars are specified at high spatial
resolution. Results show that bi-scale decomposition makes preprocessing
practical and efficiently renders self-shadowing and inter-reflection effects
from dynamic, low-frequency light sources at both scales.

Abstract:
We introduce multi-chart geometry images, a new representation
for arbitrary surfaces. It is created by resampling a surface onto a regular 2D
grid. Whereas the original scheme of Gu et al. maps the entire surface onto a
single square, we use an atlas construction to map the surface piecewise onto
charts of arbitrary shape. We demonstrate that this added flexibility reduces
parametrization distortion and thus provides greater geometric fidelity,
particularly for shapes with long extremities, high genus, or disconnected
components. Traditional atlas constructions suffer from discontinuous
reconstruction across chart boundaries, which in our context creates unacceptable
surface cracks. Our solution is a novel zippering algorithm that creates a
watertight surface. In addition, we present a new atlas chartification scheme
based on clustering optimization.

Keywords: graphics hardware, global illumination, Monte Carlo
techniques, neighborhood transfer, PRT, self transfer, soft shadows,
transfer matrix, transfer vector.

Abstract:
We present a new, real-time method for rendering diffuse and
glossy objects in low-frequency lighting environments that captures soft
shadows, inter-reflections, and caustics. As a preprocess, a novel global
transport simulator creates functions over the object’s surface representing
transfer of arbitrary, low-frequency incident lighting into transferred radiance
which includes global effects like shadows and inter-reflections from the object
onto itself. At run-time, these transfer functions are applied to actual
incident lighting. Dynamic, local lighting is handled by sampling it close to
the object every frame; the object can also be rigidly rotated with respect to
the lighting and vice versa. Lighting and transfer functions are represented
using low-order spherical harmonics. This avoids aliasing and evaluates
efficiently on graphics hardware by reducing the shading integral to a dot
product of 9 to 25 element vectors for diffuse receivers. Glossy objects are
handled using matrices rather than vectors. We further introduce functions for
radiance transfer from a dynamic lighting environment through a preprocessed
object to neighboring points in space. These allow soft shadows and caustics
from rigidly moving objects to be cast onto arbitrary, dynamic receivers. We
demonstrate real-time global lighting effects with this approach.

Note: exposed in the D3DX library
of DirectX 9 (PRT).
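
The diffuse run-time step is small enough to sketch directly (Python; array names and the fixed 25-coefficient order are assumptions of this sketch, with the paper supporting 9 to 25).

import numpy as np

def shade_diffuse(transfer, lighting):
    # transfer: (V, 25) precomputed vectors; lighting: 25 SH coefficients
    return transfer @ lighting  # per-vertex exit radiance in one dot product

# Glossy receivers replace each transfer vector with a transfer matrix,
# yielding a vector of transferred radiance instead of a scalar.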

Abstract:
To reduce memory requirements for texture mapping a model, we
build a surface parametrization specialized to its signal (such as color or
normal). Intuitively, we want to allocate more texture samples in regions with
greater signal detail. Our approach is to minimize signal approximation error —
the difference between the original surface signal and its reconstruction from
the sampled texture. Specifically, our signal-stretch parametrization metric is
derived from a Taylor expansion of signal error. For fast evaluation, this
metric is pre-integrated over the surface as a metric tensor. We minimize this
nonlinear metric using a novel coarse-to-fine hierarchical solver, further
accelerated with a fine-to-coarse propagation of the integrated metric tensor.
Use of metric tensors permits anisotropic squashing of the parametrization along
directions of low signal gradient. Texture area can often be reduced by a factor
of 4 for a desired signal accuracy compared to nonspecialized parametrizations.

Keywords: global illumination, precomputed radiance transfer
(PRT), rendering on graphics hardware, shadows.

Abstract:
Real-time shading using general (e.g., anisotropic) BRDFs has so
far been limited to a few point or directional light sources. We extend such
shading to smooth, area lighting using a low-order spherical harmonic basis for
the lighting environment. We represent the 4D product function of BRDF times the
cosine factor (dot product of the incident lighting and surface normal vectors)
as a 2D table of spherical harmonic coefficients. Each table entry represents,
for a single view direction, the integral of this product function times
lighting on the hemisphere expressed in spherical harmonics. This reduces the
shading integral to a simple dot product of 25-component vectors, easily
evaluated on PC graphics hardware. Non-trivial BRDF models require rotating
the lighting coefficients to a local frame at each point on an object, currently
forming the computational bottleneck. Real-time results can be achieved by
fixing the view to allow dynamic lighting or vice versa. We also generalize a
previous method for precomputed radiance transfer to handle general BRDF
shading. This provides shadows and inter-reflections that respond in real-time
to lighting changes on a preprocessed object of arbitrary material (BRDF) type.

Abstract:
Given an arbitrary mesh, we present a method to construct a
progressive mesh (PM) such that all meshes in the PM sequence share a common
texture parametrization. Our method considers two important goals
simultaneously. It minimizes texture stretch (small texture distances mapped
onto large surface distances) to balance sampling rates over all locations and
directions on the surface. It also minimizes texture deviation (“slippage” error
based on parametric correspondence) to obtain accurate textured mesh
approximations. The method begins by partitioning the mesh into charts using
planarity and compactness heuristics. It creates a stretch-minimizing
parametrization within each chart, and resizes the charts based on the resulting
stretch. Next, it simplifies the mesh while respecting the chart boundaries. The
parametrization is re-optimized to reduce both stretch and deviation over the
whole PM sequence. Finally, the charts are packed into a texture atlas. We
demonstrate using such atlases to sample color and normal maps over several
models.

Keywords: adaptive tessellation, graphics hardware, interactive
rendering, ray tracing.

Abstract:
We introduce hybrid rendering, a scheme that dynamically ray
traces the local geometry of reflective and refractive objects, but approximates
more distant geometry by hardware-supported environment maps (EMs). To limit
computation, we use a greedy ray path shading model that prunes the binary ray
tree generated by refractive objects to form just two ray paths. We also
restrict ray queries to triangle vertices, but perform adaptive tessellation to
shoot additional rays where neighboring ray paths differ sufficiently. By using
layered, parameterized EMs that are inferred over a set of viewpoint samples to
match ray traced imagery, we accurately handle parallax and view-dependent
shading in the environment. We increase robustness of EMs by inferring them
simultaneously across multiple viewpoints and including environmental geometry
that is occluded from the viewpoint sample but is revealed in nearby viewpoints.
We demonstrate realistic shiny and glass objects with a user-controlled
viewpoint.

Abstract: Static environment maps fail to capture local reflections
including effects like self-reflections and parallax in the reflected imagery.
We instead propose parameterized environment maps (PEMs), a set of per-view
environment maps which accurately reproduce local reflections at each viewpoint
as computed by an offline ray tracer. Even with a small set of viewpoint
samples, PEMs support plausible movement away from and between the pre-rendered
viewpoint samples while maintaining local reflections. They also make use of
environment maps supported in graphics hardware to provide real-time exploration
of the pre-rendered space. In addition to parameterization by viewpoint, our
notion of PEM extends to general, multidimensional parameterizations of the
scene, including relative motions of objects and lighting changes.
Our contributions include a technique for inferring environment maps providing a
close match to ray-traced imagery. We also explicitly infer and encode all
MIPMAP levels of the PEMs to achieve higher accuracy. We propose layered
environment maps that separate local and distant reflected geometry. We explore
several types of environment maps including finite spheres, ellipsoids, and
boxes that better approximate the environmental geometry. We demonstrate results
showing faithful local reflections in an interactive viewer.

Abstract:
Aliasing is an important problem when rendering triangle meshes.
Efficient antialiasing techniques such as mipmapping greatly improve the
filtering of textures defined over a mesh. A major component of the remaining
aliasing occurs along discontinuity edges such as silhouettes, creases, and
material boundaries. Framebuffer supersampling is a simple remedy, but 2×2
supersampling leaves behind significant temporal artifacts, while greater
supersampling demands even more fill-rate and memory. We present an alternative
that focuses effort on discontinuity edges by overdrawing such edges as
antialiased lines. Although the idea is simple, several subtleties arise.
Visible silhouette edges must be detected efficiently. Discontinuity edges need
consistent orientations. They must be blended as they approach the silhouette to
avoid popping. Unfortunately, edge blending results in blurriness. Our technique
balances these two competing objectives of temporal smoothness and spatial
sharpness. Finally, the best results are obtained when discontinuity edges are
sorted by depth. Our approach proves surprisingly effective at reducing temporal
artifacts commonly referred to as "crawling jaggies", with little added cost.

Abstract:
Many functions can map images to the sphere for use as
environment maps or spherical panoramas. We develop a new metric that
asymptotically measures how well these maps use a given number of samples to
provide the greatest worst-case frequency content of the image everywhere over
the sphere. Since this metric assumes perfect reconstruction filtering even with
highly anisotropic maps, we define another, conservative measure of sampling
efficiency that penalizes anisotropy using the larger singular value of the
mapping’s Jacobian. With these metrics, we compare spherical maps used
previously in computer graphics as well as other mappings from cartography, and
propose several new, simple mapping functions (dual equidistant and polar-capped
maps) that are significantly more efficient and exhibit less anisotropy. This is
true with respect to either efficiency metric, which we show agree in the worst
case for all but one of the spherical maps presented. Although we apply the
metrics to spherical mappings, they are useful for analyzing texture maps onto
any 3D surface.
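
The conservative measure is easy to probe numerically. The Python sketch below finite-differences the Jacobian of a standard latitude-longitude map (an illustrative choice, not one of the proposed maps) and reports its larger singular value at a given point.

import numpy as np

def latlong(u, v):
    # map the unit square to the sphere: v is latitude, u is longitude
    theta, phi = np.pi * v, 2.0 * np.pi * u
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def largest_singular_value(u, v, h=1e-5):
    # finite-difference Jacobian of the mapping at (u, v)
    J = np.column_stack([(latlong(u + h, v) - latlong(u - h, v)) / (2 * h),
                         (latlong(u, v + h) - latlong(u, v - h)) / (2 * h)])
    return np.linalg.svd(J, compute_uv=False)[0]

print(largest_singular_value(0.25, 0.50))  # at the equator
print(largest_singular_value(0.25, 0.05))  # near the pole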

Abstract:
Approximating detailed models with coarse, texture-mapped meshes
results in polygonal silhouettes. To eliminate this artifact, we introduce
silhouette clipping, a framework for efficiently clipping the rendering of
coarse geometry to the exact silhouette of the original model. The coarse mesh
is obtained using progressive hulls, a novel representation with the nesting
property required for proper clipping. We describe an improved technique for
constructing texture and normal maps over this coarse mesh. Given a perspective
view, silhouettes are efficiently extracted from the original mesh using a
precomputed search tree. Within the tree, hierarchical culling is achieved using
pairs of anchored cones. The extracted silhouette edges are used to set the
hardware stencil buffer and alpha buffer, which in turn clip and antialias the
rendered coarse geometry. Results demonstrate that silhouette clipping can
produce renderings of similar quality to high-resolution meshes in less
rendering time.

Abstract:
We generalize image-based rendering by exploiting
texture-mapping graphics hardware to decompress ray-traced “animations”. Rather
than 1D time, our animations are parameterized by two or more arbitrary
variables representing view/lighting changes and rigid object motions. To best
match the graphics hardware rendering to the input ray-traced imagery, we
describe a novel method to infer parameterized texture maps for each object by
modeling the hardware as a linear system and then performing least-squares
optimization. The parameterized textures are compressed as a multidimensional
Laplacian pyramid on fixed size blocks of parameter space. This scheme captures
the coherence in parameterized animations and, unlike previous work, decodes
directly into texture maps that load into hardware with a few, simple image
operations. We introduce adaptive dimension splitting in the Laplacian pyramid
and separate diffuse and specular lighting layers to further improve
compression. High-quality results are demonstrated at compression ratios up to
800:1 with interactive playback on current consumer graphics cards.

Abstract:
We present an efficient algorithm for visibility sorting a set
of moving geometric objects into a sequence of image layers which are composited
to produce the final image. Instead of splitting the geometry as in previous
visibility approaches, we detect mutual occluders and resolve them using an
appropriate image compositing expression or merge them into a single layer. Such
an algorithm has many applications in computer graphics; we demonstrate two:
rendering acceleration using image interpolation and visibility-correct depth of
field using image blurring.
We propose a new, incremental method for identifying mutually occluding sets of
objects and computing a visibility sort among these sets. Occlusion queries are
accelerated by testing on convex bounding hulls; less conservative tests are
also discussed. Kd-trees formed by combinations of directions in object or image
space provide an initial cull on potential occluders, and incremental collision
detection algorithms are adapted to resolve pairwise occlusions, when necessary.
Mutual occluders are further analyzed to generate an image compositing
expression; in the case of nonbinary occlusion cycles, an expression can always
be generated without merging the objects into a single layer. Results
demonstrate that the algorithm is practical for real-time animation of scenes
involving hundreds of objects each comprising hundreds or thousands of polygons.

Abstract:
For decades, animated cartoons and movie special effects have
factored the rendering of a scene into layers that are updated independently and
composed in the final display. We apply layer factorization to real-time
computer graphics. The layers allow targeting of resources, whether the ink and
paint artists of cartoons or the graphics pipeline as described here, to those
parts of the scene that are most important.
To take advantage of frame-to-frame coherence, we generalize layer factorization
to apply to both dynamic geometric objects and terms of the shading model,
introduce new ways to trade off fidelity for resource use in individual layers,
and show how to compute warps that reuse renderings for multiple frames. We
describe quantities, called fiducials, that measure the fidelity of
approximations to the original image. Layer update rates, spatial resolution,
and other quality parameters are determined by geometric, photometric,
visibility, and sampling fiducials weighted by the content author’s preferences.
We also compare the fidelity of various types of reuse warps and demonstrate the
suitability of the affine warp.
Using Talisman, a hardware architecture with an efficient layer primitive, the
work presented here dramatically improves the geometric complexity and shading
quality of scenes rendered in real-time.

Abstract:
We present a new method for accelerating the rendering of
complex static scenes. The technique is applicable to unstructured scenes
containing arbitrary geometric primitives and has sublinear asymptotic
complexity. Our approach is to construct a spatial hierarchy of cells over the
scene and to associate with each cell a simplified representation of its
contents. The scene is then rendered using a traversal of the hierarchy in which
a cell’s approximation is drawn instead of its contents if the approximation is
sufficiently accurate. We apply the method to several different scenes and
demonstrate significant speedups with little image degradation. We also exhibit
and discuss some of the artifacts that our approximation may cause.

Abstract:
We present a new method that utilizes path coherence to
accelerate walkthroughs of geometrically complex static scenes. As a
preprocessing step, our method constructs a BSP-tree that hierarchically
partitions the geometric primitives in the scene. In the course of a
walkthrough, images of nodes at various levels of the hierarchy are cached for
reuse in subsequent frames. A cached image is reused by texture-mapping it onto
a single quadrilateral that is drawn instead of the geometry contained in the
corresponding node. Visual artifacts are kept under control by using an error
metric that quantifies the discrepancy between the appearance of the geometry
contained in a node and the cached image. The new method is shown to achieve
speedups of an order of magnitude for walkthroughs of a complex outdoor scene,
with little or no loss in rendering quality.

Abstract:
In an effort to extend the sorts of lighting models used in
real-time computer graphics, this paper presents several area light source
models, including Lambertian and Phong illumination from constant and
cosine-falloff hemispherical light sources, constant subhemispherical light
sources, and constant polygonal light sources. The subhemispherical lighting
model can also be used to represent illumination from finite-distance spherical
light sources. The models are not unduly more expensive than the simple point
light source models, and are capable of real-time evaluation.

Abstract:
We present a surface representation and a set of algorithms that
allow interactive placement of curved parametric objects without
interpenetration. Using these algorithms, a modeler can place an object within
or on top of other objects, find a stable placement for it, and slide it into
new stable placements. Novel algorithms are presented to track points of contact
between bodies, detect new points of contact, and delete vanishing contacts.
Interactive speeds are maintained even when the moving body touches several
bodies at many contact points.
We describe a new algorithm that quickly brings a body into a stable
configuration with respect to a set of external forces, subject to the
constraint that it not penetrate a set of fixed bodies. This algorithm is made
possible by sacrificing the requirement that a body behave physically over time.
Intuitive control is still achieved by making incremental, "pseudo-physical"
changes to the body’s placement, while enforcing the non-interpenetration
constraint after each change.

Abstract: We present an efficient and robust algorithm for finding points
of collision between time-dependent parametric and implicit surfaces. The
algorithm detects simultaneous collisions at multiple points of contact. When
the regions of contact form curves or surfaces, it returns a finite set of
points uniformly distributed over each contact region.
Collisions can be computed for a very general class of surfaces: those for which
inclusion functions can be constructed. Included in this set are the familiar
kinds of surfaces and time behaviors encountered in computer graphics.
We use a new interval approach for constrained minimization to detect
collisions, and a tangency condition to reduce the dimensionality of the search
space. These approaches make interval methods practical for multi-point
collisions between complex surfaces. An interval Newton method based on the
solution of the interval linear equation is used to speed convergence to the
collision time and location. This method is more efficient than the Krawczyk–Moore
iteration used previously in computer graphics.

Abstract:
This paper discusses how interval analysis can be used to solve a wide variety
of problems in computer graphics. These problems include ray tracing,
interference detection, polygonal decomposition of parametric surfaces, and CSG
on solids bounded by parametric surfaces. Only two basic algorithms are
required: SOLVE, which computes solutions to a system of constraints, and
MINIMIZE, which computes the global minimum of a function, subject to a system
of constraints.
We present algorithms for SOLVE and MINIMIZE using interval analysis as the
conceptual framework. Crucial to the technique is the creation of “inclusion
functions” for each constraint and function to be minimized. Inclusion functions
compute a bound on the range of a function, given a similar bound on its domain,
allowing a branch and bound approach to constraint solution and constrained
minimization. Inclusion functions also allow the MINIMIZE algorithm to compute
global rather than local minima, unlike many other numerical algorithms.
Some very recent theoretical results are presented regarding existence and
uniqueness of roots of nonlinear equations, and global parameterizability of
implicitly described manifolds. To illustrate the power of the approach, the
basic algorithms are further developed into a new algorithm for the approximation of implicit curves.
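
The SOLVE loop is simple enough to sketch in one dimension. The inclusion function below bounds f(x) = x*x - 2 over an interval directly; a real system composes inclusion functions for arbitrary expressions. Python, with tolerances chosen arbitrarily.

def inclusion(lo, hi):
    # bound x*x - 2 over [lo, hi]; 0 is a candidate extremum if included
    cands = [lo * lo, hi * hi] + ([0.0] if lo <= 0.0 <= hi else [])
    return min(cands) - 2.0, max(cands) - 2.0

def solve(lo, hi, eps=1e-6):
    # branch and bound: return subintervals that may contain a root
    work, roots = [(lo, hi)], []
    while work:
        a, b = work.pop()
        flo, fhi = inclusion(a, b)
        if flo > 0.0 or fhi < 0.0:
            continue                 # inclusion excludes a root: discard
        if b - a < eps:
            roots.append((a, b))     # small enough: accept
        else:
            m = 0.5 * (a + b)        # otherwise subdivide
            work += [(a, m), (m, b)]
    return roots

print(solve(-3.0, 3.0))              # brackets near +/- sqrt(2)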

Abstract:
This paper discusses a new, symbolic approach to geometric
modeling called generative modeling. The approach allows specification,
rendering, and analysis of a wide variety of shapes including 3D curves,
surfaces, and solids, as well as higher-dimensional shapes such as surfaces
deforming in time, and volumes with a spatially varying mass density. The system
also supports powerful operations on shapes such as “reparameterize this curve
by arclength”, “compute the volume, center of mass, and moments of inertia of
the solid bounded by these surfaces”, or “solve this constraint or ODE system”.
The system has been used for a wide variety of applications, including creating
surfaces for computer graphics animations, modeling the fur and body shape of a
teddy bear, constructing 3D solid models of elastic bodies, and extracting
surfaces from magnetic resonance (MR) data.
Shapes in the system are specified using a language which builds
multidimensional parametric functions. The language is based on a set of
symbolic operators on continuous, piecewise differentiable parametric functions.
We present several shape examples to show how conveniently shapes can be
specified in the system. We also discuss the kinds of operators useful in a
geometric modeling system, including arithmetic operators, vector and matrix
operators, integration, differentiation, constraint solution, and constrained
minimization. Associated with each operator are several methods, which compute
properties about the parametric functions represented with the operators. We
show how many powerful rendering and analytical operations can be supported with
only three methods: evaluation of the parametric function at a point, symbolic
differentiation of the parametric function, and evaluation of an inclusion
function for the parametric function.
Like CSG, and unlike most other geometric modeling approaches, this modeling
approach is closed, meaning that further modeling operations can be applied to
any results of modeling operations, yielding valid models. Because of this
closure property, the symbolic operators can be composed very flexibly, allowing
the construction of higher-level operators without changing the underlying
implementation of the system. Because the modeling operations are described
symbolically, specified models can capture the designer’s intent without
approximation error.

Keywords: filtering, rendering on graphics hardware, sampling,
video fields.

Abstract:
Lively and dynamic motion can make exciting computer-generated
images and animations. Typically, each individual image is rendered at a
single instant of time. For a single image of a moving subject, this
conveys no sense of motion. For an animation, this can result in extremely
objectionable strobing (time-aliasing) artifacts.
Aliasing in time can be reduced by motion blur; that is, by averaging or
filtering images at many instants of time to create a result in which moving
objects are blurred. Motion blur techniques for ray tracing are relatively
well understood, involving the association of a different time value with each
ray cast (Cook et al., 1984). This paper presents a motion blur technique
suitable for graphics workstations having fast z-buffer rendering. The
essential result is that fast, high-quality motion blur is achieved simply by
1) supersampling in time using a temporal box filter, and
2) computing images on fields, rather than frames, for video animations.
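
A minimal Python sketch of this two-step recipe, assuming a hypothetical render(t) that
draws the scene at time t and returns an H x W x 3 float image (a stand-in for one fast
z-buffer pass):

import numpy as np

def motion_blurred_frame(render, t0, t1, n=8):
    """Temporal box filter: average n renders spaced uniformly over [t0, t1)."""
    times = t0 + (np.arange(n) + 0.5) * (t1 - t0) / n     # midpoint sample times
    return sum(render(t) for t in times) / n

def motion_blurred_fields(render, t0, frame_dt, n=4):
    """Render on fields: two half-interval averages per frame, each keeping its
    alternating scanlines, so moving edges update at twice the frame rate."""
    half = frame_dt / 2.0
    even = motion_blurred_frame(render, t0, t0 + half, n)
    odd  = motion_blurred_frame(render, t0 + half, t0 + frame_dt, n)
    return even[0::2], odd[1::2]                          # per-field scanlines

The sample count n trades rendering time against residual banding in fast-moving regions.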

Foreword (by James T. Kajiya): The inquiry into representation,
manipulation, and analysis of shape with computers is one of the central themes
of computer graphics. In fact, one could say that computer graphics was
born when Ivan Sutherland had the idea that Geometry is Structured Data.
Before that, people merely used computers as automated graph paper; after that,
geometry could be manipulated by computer.
Modeling is an area where, every once in a while, a dazzling new idea will have
implications that are explored in years of hard research work, system building,
and eventually commercial products. The list of such ideas is short:
certainly included are homogeneous coordinates, Coons surfaces, constructive
solid geometry, and NURBS.
This book is about an idea that may be a candidate for that short list.
During the time I've watched John Snyder develop these ideas, my understanding
of shape has taken a profound transformation. John's work has taught me
that every shape has an inner logic — a way it should be thought about.
That logic is captured in a meta-shape.
Before this work, I thought about a shape as a collection of polygons, or a
sculpted surface. But that view is very limited. With a sculpted
surface, there's really no difference between a spoon shape and a chair shape;
it's all a matter of positioning the control points in the right places.
But a spoon shape has an inner logic shared by all spoons — and that logic is
completely different from that of a chair.
John and I have spent many hours trying to discover the logic of different
everyday shapes. It's an intellectually challenging and exciting endeavor,
one that is quite pleasurable when one hits on the right logic of a shape.
Because of this, shapes are no longer just inscrutable lumps, but a series of
puzzles — sometimes easy and sometimes difficult.
Do you want to read this book? If you have a serious interest in computer
graphics, it's for you. You should also have some experience in the trade.
If you've completed an introductory course, made some pictures, and attempted to
produce an ambitious picture or animation, you're experienced. Even if
your primary interest is in image synthesis or animation, modeling should be
important to you. There's an old saying at Caltech that has proved true
many times: "Good modeling can save bad rendering, but good rendering can't save
bad modeling."

Preface: The purpose of this book is to present a new, symbolic approach to shape representation that
I believe is useful to the CAD/CAM and computer graphics communities. This book primarily addresses two
audiences. First, it is intended for people who want to know how human beings can specify and manipulate shape.
I believe the representational abstraction described in this book to be elegant, powerful, and suitable for a
new generation of commercial CAD programs. The specific implementation I describe is just a first step toward the
realization of a truly capable and easy-to-use geometric modeling system. It is my hope that this first step
nevertheless demonstrates the merits of the representational abstraction, and will encourage its incorporation
in future CAD and animation programs.
Second, it is intended for people working in the area of computational geometry who are interested in a new, robust
class of algorithms for manipulating shapes. This class of algorithms employs interval analysis, and includes
algorithms for ray tracing, interference detection, polygonal decomposition of parametric surfaces,
CSG on solids bounded by parametric surfaces, and offset operations on parametric curves and surfaces.
These algorithms can be implemented separately from the modeling system described in this book. However,
the inclusion functions on which these algorithms depend are naturally implemented in the system described here,
with each symbolic operator having an inclusion function method, thus allowing inclusion functions for the
arbitrary composition of the operators. (This kind of inclusion function is termed a "natural interval extension"
in Chapter 5.) The result is that the algorithms can be applied to a great variety of shapes, while the
implementation remains simple.
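For instance, implementing the arithmetic operators on intervals is already enough to
obtain an inclusion function for any composition of them. The small Python sketch below
illustrates the idea; it is not the book's implementation.

class Interval:
    def __init__(self, lo, hi): self.lo, self.hi = lo, hi
    def __add__(self, o): return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o): return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
        return Interval(min(p), max(p))
    def __repr__(self): return f"[{self.lo:g}, {self.hi:g}]"

def f(x):
    return x * x - x                  # the same code runs on floats and Intervals

print(f(0.75))                        # point evaluation: -0.1875
print(f(Interval(0.0, 1.0)))          # [-1, 1]: a conservative bound on the
                                      # true range [-0.25, 0]

Natural extensions are conservative, as the example shows; tighter inclusion functions
prune more boxes and so speed up branch-and-bound algorithms built on them.
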
What exactly is generative modeling and what good is it? Because of the novelty of the generative modeling approach
to geometric modeling, I believe some answer to this question should be given in the Preface, although it is
answered much more completely in Chapters 1 and 2.
Shapes in the generative modeling approach are represented as multidimensional, continuous, piecewise-differentiable
parametric functions. A set of symbolic operators on such functions is used to build complicated shapes by
simple composition of the symbolic operators. The book describes a language that allows the procedural specification
of shapes represented with these symbolic operators. Many shape examples are presented to show how conveniently shapes
can be specified using this language. My intent here is not to extol the particular syntax of the modeling language
presented, but instead to show the usefulness of modeling by composing high-level, symbolic operators. The
language described here was chosen because it makes these ideas concrete, and because it is part of a working system.
This book also discusses the kinds of operators useful in a geometric modeling system, including arithmetic operators,
vector and matrix operators, integration, differentiation, constraint solution, and constrained minimization. Associated
with each operator are several methods that compute properties about the parametric function represented with the
operators. This book shows how numerous rendering and analytical operations can be supported with only three methods:
evaluation of the parametric function at a point, symbolic differentiation of the parametric function, and evaluation
of an inclusion function for the parametric function (inclusion functions are defined in Chapter 5).
One advantage of the generative modeling approach is that it allows specification, rendering, and analysis of many kinds
of shapes, including the standard 3D curves, surfaces, and solids, as well as higher-dimensional shapes such as
surfaces deforming in time, and volumes with a spatially varying mass density. The approach also supports powerful,
high-level operations on shapes such as "reparameterize this curve by arclength", "compute the
volume, center of mass, and moments of inertia of the solid bounded by these surfaces", or "solve
this constraint or ODE system." An implementation of this approach, called GENMOD, has been used in our
computer graphics research lab at the California Institute of Technology for a wide variety of applications, including
creating surfaces for computer graphics animations, modeling the body shape and fur of a teddy bear, constructing
3D solid models of elastic bodies, and extracting models from MRI data. In fact, almost all the figures in this
book were made using GENMOD.
Unlike most other geometric modeling approaches, a further advantage of this approach is that it is closed, meaning
that modeling operations can be applied to any results of earlier modeling operations, yielding valid models.
Because of this closure property, the symbolic operators can be composed very flexibly, allowing the construction
of higher-level operators without changing the underlying implementation of the system. Because the modeling
operations are described symbolically, specified models can capture the designer's intent without approximation error.
The book is organized as follows: Chapter 1 examines how geometric modeling approaches may be compared, discusses
some approaches used in the past and their problems, and introduces the generative modeling approach.
Chapter 2 answers the question, "What is a generative model?" and traces the development of the generative
modeling approach. Chapter 3 presents a series of shape specification examples. Rendering of generative models
is treated in Chapter 4. Chapters 5 and 6 discuss how robust tools for the analysis of shape can be developed
using the technique of interval analysis. Inclusion functions, and their use in creating algorithms to solve
systems of constraints and constrained minimization problems, are treated in Chapter 5. These algorithms,
in turn, are applied to higher-level problems in geometric modeling, such as CSG, in Chapter 6. Appendix A provides
an overview of the GENMOD language. Appendix B presents more elaborate examples of GENMOD specifications that do not
fit conveniently in the text. Proofs of the interval analysis theorems used in Chapters 5 and 6 can be found in
Appendix C.
As the reader will undoubtedly realize, much of the work presented here is research in progress. I therefore
have tried to point out areas for future improvement, as well as describe what has already been implemented.
Although much remains to be done, what has been done gives me confidence that this approach may lead to the
solution of many nagging problems in geometric modeling. It is my hope that the ideas are described well
enough to stimulate continuing, fruitful work in this area of symbolic, interval-based geometric modeling.

Keywords: lists and 3D grids, parametric surface, spatial data
structures, triangle mesh.

Abstract:
An approach to ray tracing complex models containing
mathematically defined surfaces is presented. Parametric and implicit
surfaces, and boolean combinations of these, are first tessellated into
triangles. The resulting triangles from many such surfaces are organized in a
hierarchy of lists and 3D grids, allowing efficient calculation of ray/model
intersections. The technique has been used to ray trace models containing
billions of triangles and surfaces never before ray traced. The organizing
scheme developed is also independently useful for efficiently ray tracing any
complex model, whether or not it contains surface tessellations.
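
To make the organizing scheme concrete, here is a minimal single-level Python sketch:
triangles are binned into a uniform grid by bounding box, and a ray visits only the cells
it pierces (a standard 3D digital differential analyzer). The names are mine, ray/triangle
intersection is left to the caller, the ray origin is assumed to lie inside the grid, and
the hierarchy of nested lists and grids is omitted.

import math

def build_grid(tris, lo, hi, n):
    """Bin triangles into an n x n x n grid over the box [lo, hi].
    tris: iterable of (bbox_lo, bbox_hi, tri). Returns (cell dict, cell size)."""
    cell = [(hi[d] - lo[d]) / n for d in range(3)]
    grid = {}
    for blo, bhi, tri in tris:
        span = [range(max(0, int((blo[d] - lo[d]) / cell[d])),
                      min(n - 1, int((bhi[d] - lo[d]) / cell[d])) + 1)
                for d in range(3)]
        for i in span[0]:
            for j in span[1]:
                for k in span[2]:
                    grid.setdefault((i, j, k), []).append(tri)
    return grid, cell

def walk_cells(origin, direction, lo, n, cell):
    """Yield the grid cells pierced by the ray, nearest first."""
    ijk  = [min(n - 1, max(0, int((origin[d] - lo[d]) / cell[d]))) for d in range(3)]
    step = [1 if direction[d] > 0 else -1 for d in range(3)]
    tmax, tstep = [], []
    for d in range(3):
        if direction[d] == 0.0:
            tmax.append(math.inf); tstep.append(math.inf)
        else:
            wall = lo[d] + (ijk[d] + (step[d] > 0)) * cell[d]   # next cell wall
            tmax.append((wall - origin[d]) / direction[d])
            tstep.append(abs(cell[d] / direction[d]))
    while all(0 <= ijk[d] < n for d in range(3)):
        yield tuple(ijk)                  # test this cell's triangles against the ray
        d = tmax.index(min(tmax))         # cross the nearest cell wall
        ijk[d] += step[d]
        tmax[d] += tstep[d]

Nesting further grids or lists inside individual cells is what extends this single level
into the hierarchy the abstract describes.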
