The Graphics group at Microsoft Research is exploring new graphics representations and algorithms that take advantage of existing and upcoming hardware features to heighten the quality of real-time computer graphics. This page lists graphics projects that do not have their own project page; see Graphics home for a complete list of all our projects.
Efficient Processing of Sampled Geometric Surfaces and Signals
Not all content can be described procedurally; it is often useful to sample and tabulate data. Regular grids are especially attractive because they exploit the texture-mapping capability of graphics hardware and because signal processing on this simple structure is well understood. Texture maps also decouple the sampling of an object's shape from that of its associated signals, such as color or normals. The Graphics group is exploring parameterization techniques that represent surface signals as texture maps that are as small as possible. We have investigated a diverse set of signals enabling many shading effects, including fur, hatching, and geometric detail with precomputed shading. We can even represent the geometry itself as a 2D image, called a geometry image.
Lapped Textures use a Poisson-like surface parameterization metric to align a user-specified surface direction field with the axes of the texture domain. See [lapped].
Texture Mapping Progressive Meshes shows that all meshes in a progressive mesh sequence can be made to share a common texture parameterization, and introduces a texture-stretch metric to minimize undersampling in the resulting parameterization. See [tmpm].
Signal-Specialization Parameterization extends the parameterization stretch metric and optimization algorithm to create a texture atlas optimized to a particular signal, thereby allowing a more faithful resampled approximation for the same texture map size. See [ssp][ssplinear].
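For a single triangle, the L2 stretch of [tmpm] follows from the singular values of the Jacobian of the map from texture space to the surface; the RMS singular value can be computed directly from the two partial-derivative vectors. A minimal sketch (function name and layout are ours):

```python
import math

def l2_stretch(tri_uv, tri_xyz):
    """Per-triangle L2 texture stretch (a sketch of the [tmpm] metric).
    tri_uv:  three (s, t) texture coordinates.
    tri_xyz: three (x, y, z) surface positions.
    Returns sqrt((gamma^2 + Gamma^2)/2), the RMS singular value of the
    map from texture domain to surface; 1.0 means an isometric triangle."""
    (s1, t1), (s2, t2), (s3, t3) = tri_uv
    q1, q2, q3 = tri_xyz
    A2 = (s2 - s1) * (t3 - t1) - (s3 - s1) * (t2 - t1)  # 2 * texture-domain area
    # partial derivatives of the surface position w.r.t. s and t
    Ss = [(q1[i]*(t2 - t3) + q2[i]*(t3 - t1) + q3[i]*(t1 - t2)) / A2 for i in range(3)]
    St = [(q1[i]*(s3 - s2) + q2[i]*(s1 - s3) + q3[i]*(s2 - s1)) / A2 for i in range(3)]
    a = sum(x * x for x in Ss)  # Ss . Ss
    c = sum(x * x for x in St)  # St . St
    return math.sqrt((a + c) / 2)
```

An identity parameterization (texture coordinates equal to the surface's own coordinates) yields stretch exactly 1; stretching the surface while keeping the texture coordinates fixed raises the value, signaling undersampling.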
Exact and Approximate Surface Geodesics are an important component for improving the quality of parameterized surface charts, both in terms of their boundaries and their interior distortion. See [geodesics].
Isocharts combine ideas from signal-specialized stretch with recent techniques in nonlinear dimension reduction (isomap) to automatically produce compact texture atlases for arbitrary meshes. See [isocharts].
Geometry Images replace irregular triangle meshes with a completely regular structure: a simple 2D array of quantized points. Surface signals such as normals and colors are stored in similar 2D arrays using the same implicit surface parameterization, so texture coordinates are unnecessary. Geometry images can be encoded using traditional image compression algorithms, such as wavelet-based coders. See [gim][mcgim][sgim].
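The implicit parameterization makes decoding trivial: each cell of the 2D grid yields two triangles, and the texture coordinate of vertex (i, j) is simply (i/(n-1), j/(n-1)). A sketch of this decoding step (function name ours):

```python
def geometry_image_to_mesh(points):
    """Triangulate an n x n geometry image (illustration).
    points: n x n grid of (x, y, z) positions.  Each grid cell produces two
    triangles; the implicit parameterization of vertex (i, j) is
    (i/(n-1), j/(n-1)), so no texture coordinates need be stored."""
    n = len(points)
    verts = [p for row in points for p in row]   # row-major vertex array
    tris = []
    for i in range(n - 1):
        for j in range(n - 1):
            a, b = i * n + j, i * n + j + 1
            c, d = (i + 1) * n + j, (i + 1) * n + j + 1
            tris.append((a, b, c))
            tris.append((b, d, c))
    return verts, tris
```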
Geometry Clipmaps cache the terrain in a set of nested regular grids centered about the viewer. This simple framework provides visual continuity, uniform frame rate, complexity throttling, and graceful degradation. Moreover, it enables two exciting new real-time capabilities: decompression and synthesis. Our main dataset is a 40 GB height map of the United States. A compressed image pyramid reduces its size by a remarkable factor of 100, so that it fits entirely in memory; this compressed data also contributes normal maps for shading. As the viewer approaches the surface, we synthesize grid levels finer than the stored terrain using fractal noise displacement. Decompression, synthesis, and normal-map computations are incremental, thereby allowing interactive flight at 60 frames/sec.
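The nesting of the grids can be illustrated by computing each level's world-space extent, snapped so that its samples line up with those of the next-coarser level. A sketch under assumed conventions (the function name and snapping rule are ours, not the paper's exact scheme):

```python
import math

def clipmap_extents(vx, vy, num_levels, m, base_spacing=1.0):
    """World-space extent (x0, y0, x1, y1) of each nested clipmap level.
    Level l has sample spacing base_spacing * 2**l; each level's origin is
    snapped to twice its own spacing so its samples coincide with the grid
    of the next-coarser level, keeping the rings aligned as the viewer moves."""
    extents = []
    for l in range(num_levels):
        s = base_spacing * (1 << l)
        x0 = math.floor(vx / (2 * s)) * (2 * s) - (m // 2) * s
        y0 = math.floor(vy / (2 * s)) * (2 * s) - (m // 2) * s
        extents.append((x0, y0, x0 + (m - 1) * s, y0 + (m - 1) * s))
    return extents
```

Each level covers twice the extent of the level before it with the same m x m sample budget, which is what makes the complexity of rendering independent of terrain size.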
Random-Access GPU Data Structures
Perfect Spatial Hashing packs sparse data into a compact table while retaining efficient random access. Because our hash function makes a single reference to a small offset table, queries always involve exactly two memory accesses and are thus ideally suited for parallel SIMD evaluation on graphics hardware. Applications include vector images, texture sprites, alpha channel compression, 3D-parameterized textures, 3D painting, simulation, and collision detection. See [perfecthash].
Texture Synthesis
Procedural and sampled representations can also be combined; texture synthesis is an example. The idea is to record how a signal varies locally via a small 2D example and to automatically synthesize a similar version over a larger 2D area or a 3D surface. Previous approaches grow the texture by successively inserting small patches, an inherently slow and serial process. Our work has developed fast, parallel solutions.
Parallel controllable texture synthesis operates in real-time on a GPU. Texture variation is achieved by multiresolution jittering of exemplar coordinates. Combined with the local support of parallel synthesis, the jitter enables intuitive user controls including multiscale randomness, spatial modulation over both exemplar and output, feature drag-and-drop, and periodicity constraints. We also introduce synthesis magnification, a fast method for amplifying coarse synthesis results to higher resolution. See [paratexsyn].
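The jitter step can be sketched as a deterministic, hash-based perturbation of exemplar coordinates, scaled by a per-level randomness control (the hash function and its constants below are ours, not the paper's):

```python
def _h(a, b, c):
    # deterministic integer hash mapped to [0, 1)
    n = (a * 73856093) ^ (b * 19349663) ^ (c * 83492791)
    return ((n & 0x7FFFFFFF) % 10007) / 10007.0

def jitter(u, v, x, y, level, randomness, exemplar_size):
    """Jitter the exemplar coordinate (u, v) at output pixel (x, y).
    randomness[level] in [0, 1] scales the perturbation applied at this
    synthesis level: 0 leaves coordinates untouched, 1 permits jumps of up
    to half the exemplar.  Coordinates wrap toroidally."""
    r = randomness[level] * exemplar_size / 2
    ju = round((_h(x, y, 2 * level) - 0.5) * 2 * r)
    jv = round((_h(x, y, 2 * level + 1) - 0.5) * 2 * r)
    return (u + ju) % exemplar_size, (v + jv) % exemplar_size
```

Because the perturbation depends only on the output pixel and level, every pixel can be jittered independently, which is what keeps the scheme parallel and GPU-friendly.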
Appearance-space texture synthesis shows that texture synthesis quality is greatly improved if pointwise colors are replaced by appearance vectors that incorporate nonlocal information such as feature and radiance-transfer data. We perform dimensionality reduction on these vectors prior to synthesis, to create a new appearance-space exemplar. Synthesis in this information-rich space lets us reduce runtime neighborhood vectors from 5x5 grids to just 4 locations. We introduce novel techniques for coherent anisometric synthesis, surface texture synthesis directly in an ordinary atlas, and texture advection. Remarkably, we achieve all these functionalities in real-time, or 3 to 4 orders of magnitude faster than prior work. See [apptexsyn].
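The dimensionality reduction step can be sketched with plain PCA via power iteration (our implementation, for illustration; the paper projects high-dimensional appearance vectors down to roughly 4 to 8 dimensions before synthesis):

```python
import math

def pca_reduce(vectors, k, iters=100):
    """Project d-dimensional appearance vectors to k dimensions via PCA
    (power iteration with deflation on the covariance matrix; a sketch)."""
    n, d = len(vectors), len(vectors[0])
    mean = [sum(v[i] for v in vectors) / n for i in range(d)]
    X = [[v[i] - mean[i] for i in range(d)] for v in vectors]
    # covariance matrix (d x d)
    C = [[sum(X[r][i] * X[r][j] for r in range(n)) / n
          for j in range(d)] for i in range(d)]
    basis = []
    for _ in range(k):
        w = [1.0] * d
        for _ in range(iters):
            w = [sum(C[i][j] * w[j] for j in range(d)) for i in range(d)]
            # deflate: remove components already found
            for b in basis:
                dot = sum(wi * bi for wi, bi in zip(w, b))
                w = [wi - dot * bi for wi, bi in zip(w, b)]
            norm = math.sqrt(sum(wi * wi for wi in w)) or 1.0
            w = [wi / norm for wi in w]
        basis.append(w)
    # coordinates of each centered vector in the principal basis
    return [[sum(x[i] * b[i] for i in range(d)) for b in basis] for x in X]
```

After this projection, runtime neighborhood comparisons operate on short vectors, which is what lets the synthesis neighborhoods shrink from 5x5 grids to a few locations.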
Surface tangent vector fields are an essential ingredient in controlling surface appearance for applications ranging from anisotropic shading to texture synthesis and non-photorealistic rendering. We present a simple framework to efficiently create smooth tangent vector fields that follow constraints interactively sketched by a user. See [vfdesign].
Surface Reconstruction
Often, detailed models are best acquired by scanning physical objects, for instance with laser range scanners. Converting the resulting 3D points into useful surface geometry is referred to as surface reconstruction. See [thesis][recon][psrecon].
Poisson surface reconstruction considers all the oriented points at once, without resorting to heuristic spatial partitioning or blending, and is therefore highly resilient to data noise. Our Poisson formulation admits a hierarchy of locally supported basis functions, so the solution reduces to a well-conditioned sparse linear system. See [poissonrecon].
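The essence of the method can be illustrated in one dimension: oriented samples define a gradient field V, and solving the Poisson equation chi'' = div V recovers a smoothed indicator function whose level set is the "surface". A toy sketch (our illustration, not the paper's 3D octree solver):

```python
def poisson_indicator_1d(n, a, b, iters=4000):
    """1D analogue of Poisson reconstruction (illustration only).
    Two oriented samples at cells a and b define a gradient field V that
    rises at a and falls at b; solving chi'' = div V with Gauss-Seidel and
    zero boundary values recovers the indicator of the interval (a, b]."""
    V = [0.0] * n
    V[a], V[b] = 1.0, -1.0                              # oriented samples
    f = [0.0] + [V[i] - V[i - 1] for i in range(1, n)]  # divergence (backward diff)
    chi = [0.0] * n                                     # chi[0] = chi[n-1] = 0
    for _ in range(iters):                              # Gauss-Seidel sweeps
        for i in range(1, n - 1):
            chi[i] = (chi[i - 1] + chi[i + 1] - f[i]) / 2
    return chi
```

In 3D the right-hand side comes from splatting the sample normals into a vector field over an adaptive octree, but the global, all-points-at-once character of the solve is the same, which is the source of the noise resilience.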
Multilevel streaming for out-of-core surface reconstruction allows the reconstruction of huge datasets, far too large to fit into memory. We realize a cascading multigrid algorithm as a single sweep across the data, by simultaneously advancing through the multiple data streams associated with different levels of a sparse octree. See [mlstream].
Displaced Subdivision Surfaces represent a detailed surface model as a scalar-valued displacement over a smooth domain surface. The representation defines both the domain surface and the displacement function using a unified subdivision framework, allowing for simple and efficient evaluation of analytic surface properties. The encoding of fine detail as a scalar function makes the representation extremely compact. See [dss].
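A one-dimensional analogue illustrates the idea: a smooth domain curve produced by subdivision, plus a scalar displacement applied along the curve normal (Chaikin subdivision and the function names are our choices for illustration, standing in for the paper's subdivision-surface framework):

```python
import math

def chaikin(points, levels):
    """Smooth domain curve via Chaikin (quadratic B-spline) subdivision,
    a 1D stand-in for the smooth domain surface of [dss]."""
    for _ in range(levels):
        nxt = []
        for (px, py), (qx, qy) in zip(points, points[1:]):
            nxt.append(((3 * px + qx) / 4, (3 * py + qy) / 4))
            nxt.append(((px + 3 * qx) / 4, (py + 3 * qy) / 4))
        points = nxt
    return points

def displace(points, height):
    """Offset each sample along the curve normal by the scalar height(t),
    t in [0, 1]; the analogue of the scalar displacement map."""
    out = []
    for i, (x, y) in enumerate(points):
        ax, ay = points[max(i - 1, 0)]
        bx, by = points[min(i + 1, len(points) - 1)]
        tx, ty = bx - ax, by - ay               # central-difference tangent
        L = math.hypot(tx, ty) or 1.0
        t = i / (len(points) - 1)
        out.append((x - height(t) * ty / L, y + height(t) * tx / L))
    return out
```

Because the detail is a single scalar per sample rather than a full 3D offset, the representation stays compact, and the normals needed for displacement come analytically from the smooth domain.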