Li-Wei's Graphics Projects

Concentric Mosaics

Abstract:
This paper presents a novel 3D plenoptic function, which we call concentric mosaics.  We constrain camera motion to planar concentric circles, and create concentric mosaics using a manifold mosaic for each circle (i.e., composing slit images taken at different locations).  Concentric mosaics index all input image rays naturally in three parameters: radius, rotation angle, and vertical elevation.  Novel views are rendered by combining the appropriate captured rays efficiently at rendering time.  Although vertical distortions exist in the rendered images, they can be alleviated by depth correction.  Like panoramas, concentric mosaics do not require recovering geometric and photometric scene models.  Moreover, concentric mosaics provide a much richer user experience by allowing the user to move freely in a circular region and observe significant parallax and lighting changes.  Compared with a light field or Lumigraph, concentric mosaics have a much smaller file size because only a 3D plenoptic function is constructed.  Concentric mosaics have good space and computational efficiency, and are very easy to capture.  This paper describes a complete working system covering the capture, construction, compression, and rendering of concentric mosaics from both synthetic and real environments.
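
To make the ray indexing concrete, below is a minimal C++ sketch (illustrative only, not code from the actual system) of how a planar viewing ray maps to a (radius, rotation angle) index into the captured mosaics; the uniform radius spacing, the counts, and all names are assumptions made for the example:

#include <cmath>
#include <cstdio>

const double kPi = 3.14159265358979323846;

// A viewing ray in the capture plane: origin (px, py), unit direction (dx, dy).
struct Ray2D { double px, py, dx, dy; };

// Map a planar viewing ray to concentric-mosaic indices (mosaic, column).
// Hypothetical layout: mosaics 0..numMosaics-1 sample radii uniformly in
// [0, rMax], and each mosaic holds `columns` slit images over 360 degrees.
void indexRay(const Ray2D& r, int numMosaics, double rMax, int columns,
              int* mosaic, int* column) {
    // Signed perpendicular distance from the circle center (the origin) to
    // the ray; the ray is tangent to the circle of this radius.  A full
    // system also uses the sign to pick the tangent direction.
    double s = r.px * r.dy - r.py * r.dx;
    double radius = std::fabs(s);

    // Tangent point: foot of the perpendicular from the origin to the ray.
    double t = -(r.px * r.dx + r.py * r.dy);
    double qx = r.px + t * r.dx, qy = r.py + t * r.dy;
    double phi = std::atan2(qy, qx);          // rotation angle in [-pi, pi]

    // Quantize to the nearest captured circle and the nearest slit image.
    int k = (int)std::lround(radius / rMax * (numMosaics - 1));
    if (k < 0) k = 0;
    if (k > numMosaics - 1) k = numMosaics - 1;
    int c = (int)(((phi + kPi) / (2.0 * kPi)) * columns) % columns;

    *mosaic = k;   // which concentric mosaic (radius)
    *column = c;   // which slit image (rotation angle)
    // The ray's vertical elevation then selects the row within the slit.
}

int main() {
    Ray2D r{0.3, 0.1, std::cos(0.5), std::sin(0.5)};
    int k, c;
    indexRay(r, 32, 1.0, 3600, &k, &c);
    std::printf("mosaic %d, column %d\n", k, c);
    return 0;
}

In the paper, rays that fall between captured circles are handled by interpolation and depth correction; the nearest-neighbor lookup above is the simplest possible stand-in.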

Online documents:
Demo application (for Pentium PCs running Windows only; zipped, 607 KB)

Data sets: toys (zipped, 31.9 MB) and lobby (zipped, 16.0 MB)

Demo video (voiced by David Thiel; MPEG format, 43.5 MB)

Harry Shum, Li-Wei He.  Rendering with Concentric Mosaics. Proceedings of SIGGRAPH 99, in Computer Graphics Proceedings, Annual Conference Series, 299-306, August 1999.

Layered Depth Images

Abstract:
In this paper we present a set of efficient image-based rendering methods capable of rendering multiple frames per second on a PC.  The first method warps Sprites with Depth, representing smooth surfaces without the gaps found in other techniques.  A second method for more general scenes performs warping from an intermediate representation called a Layered Depth Image (LDI).  An LDI is a view of the scene from a single input camera view, but with multiple pixels along each line of sight.  The size of the representation grows only linearly with the observed depth complexity of the scene.  Moreover, because the LDI data are represented in a single image coordinate system, McMillan's warp-ordering algorithm can be successfully adapted.  As a result, pixels are drawn in the output image in back-to-front order.  No z-buffer is required, so alpha compositing can be done efficiently without depth sorting.  This makes splatting an efficient solution to the resampling problem.
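
As an illustration of the data structure, here is a small C++ sketch (a simplification, not the paper's implementation) of a layered depth pixel and of back-to-front "over" compositing of one pixel's layers; the field names, the premultiplied-alpha convention, and the omitted splat-size index are assumptions of the sketch:

#include <cstdio>
#include <vector>

// One sample along a line of sight.  A real LDI pixel also carries a
// splat-size index used to solve the resampling problem when warping.
struct DepthPixel {
    float r, g, b, a;   // premultiplied color and coverage
    float z;            // depth along the line of sight
};

// A layered depth image: a single camera view in which every pixel
// location holds a variable-length list of samples, sorted near-to-far.
// Storage grows only linearly with the scene's depth complexity.
struct LDI {
    int width, height;
    std::vector<std::vector<DepthPixel>> pixels;   // width * height lists
    std::vector<DepthPixel>& at(int x, int y) { return pixels[y * width + x]; }
};

// Composite one pixel's layers far-to-near with the "over" operator.
// In the full renderer, McMillan's ordering also visits pixel *locations*
// in an occlusion-compatible order, so no z-buffer is needed anywhere.
void compositePixel(const std::vector<DepthPixel>& layers, float out[4]) {
    out[0] = out[1] = out[2] = out[3] = 0.0f;
    for (auto it = layers.rbegin(); it != layers.rend(); ++it) {
        float ia = 1.0f - it->a;
        out[0] = it->r + ia * out[0];
        out[1] = it->g + ia * out[1];
        out[2] = it->b + ia * out[2];
        out[3] = it->a + ia * out[3];
    }
}

int main() {
    std::vector<DepthPixel> layers = {
        {0.1f, 0.3f, 0.1f, 0.5f, 1.0f},   // near, half-transparent green
        {0.5f, 0.0f, 0.0f, 1.0f, 3.0f},   // far, opaque red
    };
    float out[4];
    compositePixel(layers, out);
    std::printf("rgba = %.2f %.2f %.2f %.2f\n", out[0], out[1], out[2], out[3]);
    return 0;
}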

Online documents:
Demo video (voiced by Michael Cohen; Windows Media format, 100 Kbps)

Jonathan Shade, Steven Gortler, Li-Wei He, Richard Szeliski.  Layered Depth Images.  Proceedings of SIGGRAPH 98, in Computer Graphics Proceedings, Annual Conference Series, 231-242, July 1998.

Steven Gortler, Li-Wei He, Michael Cohen.  Rendering Layered Depth Images.  Microsoft Research Technical Report, 1997.

Rendering Realistic Trees with a Ray Tracer

Abstract:
Eng-Shien, Jeremy, and Li-Wei captured second place in the rendering competition with their modeling of natural scenes.  The tree model was generated by a program they wrote, based on Eric Haines' SPD code with some additional randomization.  The trunk and branches were modeled as cone segments, while the leaves were modeled as spheres with texture and trim mapping; the single leaf used texture, bump, transparency, and trim maps to define its appearance.  The scenes were initially inspired by the M.C. Escher engravings "Three Worlds" and "Dewdrop".

The lake was bump-mapped.  The first picture has a fractal mountain in the background, and two pictures demonstrate depth-of-field effects.  All of the pictures use environment mapping, which is especially visible in the reflections off the dewdrop.
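
For the curious, below is a rough C++ sketch of the kind of recursive, randomized branch construction such a generator performs, emitting cone segments for the trunk and branches; every constant here (child counts, tilt ranges, shrink factors) is an illustrative guess, not a parameter of the actual SPD-derived program:

#include <cmath>
#include <cstdio>
#include <cstdlib>

// A truncated cone from (x0,y0,z0) radius r0 to (x1,y1,z1) radius r1:
// the primitive used for the trunk and branches.
struct Cone { double x0, y0, z0, r0, x1, y1, z1, r1; };

static double frand(double lo, double hi) {
    return lo + (hi - lo) * (std::rand() / (double)RAND_MAX);
}

// Recursively emit cone segments for one branch.
void growBranch(double x, double y, double z,
                double dx, double dy, double dz,
                double len, double radius, int depth) {
    if (depth == 0 || radius < 0.005) {
        // Terminal twig: a full generator would place a leaf sphere here,
        // trimmed and textured to look like foliage.
        return;
    }
    Cone c{x, y, z, radius,
           x + dx * len, y + dy * len, z + dz * len, radius * 0.7};
    std::printf("cone %.3f %.3f %.3f r=%.3f -> %.3f %.3f %.3f r=%.3f\n",
                c.x0, c.y0, c.z0, c.r0, c.x1, c.y1, c.z1, c.r1);

    int children = 2 + std::rand() % 2;          // 2-3 child branches
    for (int i = 0; i < children; ++i) {
        // Randomly tilt the child direction, then renormalize.
        double ndx = dx + frand(-0.6, 0.6);
        double ndy = dy + frand(-0.6, 0.6);
        double ndz = dz + frand(0.0, 0.4);       // bias growth upward
        double n = std::sqrt(ndx * ndx + ndy * ndy + ndz * ndz);
        growBranch(c.x1, c.y1, c.z1, ndx / n, ndy / n, ndz / n,
                   len * frand(0.6, 0.8), c.r1, depth - 1);
    }
}

int main() {
    std::srand(42);
    growBranch(0, 0, 0, 0, 0, 1, 1.0, 0.1, 5);  // trunk grows along +z
    return 0;
}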

Online images:
Li-Wei He, Jeremy Henrickson, Eng-Shien Wu.  Tree in Lake and Dewdrop in Leaf, in CS348b Rendering Competition, Stanford University, Spring 1997.

The Virtual Cinematographer: A Paradigm for Automatic Real-Time Camera Control and Directing

Abstract:
This paper presents a paradigm for automatically generating complete camera specifications for capturing events in virtual 3D environments in real time. We describe a fully implemented system, called the Virtual Cinematographer, and demonstrate its application in a virtual "party" setting. Cinematographic expertise, in the form of film idioms, is encoded as a set of small, hierarchically organized finite state machines. Each idiom is responsible for capturing a particular type of scene, such as three virtual actors conversing or one actor moving across the environment. The idiom selects shot types and the timing of transitions between shots to best communicate events as they unfold. A set of camera modules, shared by the idioms, is responsible for the low-level geometric placement of specific cameras for each shot. The camera modules are also responsible for making subtle changes in the virtual actors' positions to best frame each shot. In this paper, we discuss some basic heuristics of filmmaking and show how these ideas are encoded in the Virtual Cinematographer.
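
To give a flavor of how an idiom is encoded, here is a tiny C++ sketch of a finite state machine for a hypothetical two-actor conversation idiom; the event set, state names, and transition rules are simplified stand-ins for the hierarchically organized FSMs in the actual system:

#include <cstdio>

// Events the idiom reacts to; in the real system these arrive from the
// application as the action unfolds (who is talking, how long the
// current shot has been held).
enum class Event { ActorATalks, ActorBTalks, ShotTooLong };

// States of a hypothetical two-actor conversation idiom.  Each state
// names the camera module that frames the shot; here, external
// over-the-shoulder shots favoring actor A or actor B.
enum class Shot { ExternalOfA, ExternalOfB };

// One step of the idiom's state machine: choose the next shot.  The
// transitions encode simple film heuristics: cut to favor the speaker,
// and cut away from a shot that has been held too long.  The real
// system also enforces minimum shot durations and nests idioms
// hierarchically.
Shot step(Shot current, Event e) {
    switch (current) {
    case Shot::ExternalOfA:
        if (e == Event::ActorBTalks || e == Event::ShotTooLong)
            return Shot::ExternalOfB;
        return current;
    case Shot::ExternalOfB:
        if (e == Event::ActorATalks || e == Event::ShotTooLong)
            return Shot::ExternalOfA;
        return current;
    }
    return current;
}

int main() {
    Shot s = Shot::ExternalOfA;
    const Event script[] = { Event::ActorBTalks, Event::ShotTooLong,
                             Event::ActorATalks };
    for (Event e : script) {
        s = step(s, e);
        std::printf("cut to: %s\n",
                    s == Shot::ExternalOfA ? "external of A" : "external of B");
    }
    return 0;
}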

Online documents:
Course slides (given on May 29, 2000 at MSR Beijing; PowerPoint format)

US Patent No. 6,040,841

Demo video (voiced by Michael Cohen; Windows Media format, 512 Kbps)

Li-Wei He, Michael F. Cohen, David H. Salesin.  The Virtual Cinematographer: A Paradigm for Automatic Real-Time Camera Control and Directing.  Proceedings of SIGGRAPH 96, in Computer Graphics Proceedings, Annual Conference Series, 217-224, August 1996.

Declarative Camera Control for Automatic Cinematography

Abstract:
Animations generated by interactive 3D computer graphics applications are typically portrayed either from a particular character's point of view or from a small set of strategically placed viewpoints. By ignoring camera placement, such applications fail to realize important storytelling capabilities that have been explored by cinematographers for many years.

In this paper, we describe several of the principles of cinematography and show how they can be formalized into a declarative language, called the Declarative Camera Control Language (DCCL). We describe the application of DCCL within the context of a simple interactive video game and argue that DCCL represents cinematic knowledge at the same level of abstraction as expert directors by encoding 16 idioms from a film textbook. These idioms produce compelling animations, as demonstrated on the accompanying videotape.
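
As an illustration of the declarative flavor, here is a small C++ sketch that encodes a textbook-style two-person dialogue idiom as data rather than as control flow; the field names and numeric values are invented for the example and are not actual DCCL syntax:

#include <cstdio>
#include <string>
#include <vector>

// A single fragment of an idiom: what to shoot and from where.
// Field names are illustrative, not actual DCCL constructs.
struct Fragment {
    std::string shotType;    // e.g. "apex", "external", "tracking"
    std::string subject;     // actor or pair of actors to frame
    std::string range;       // e.g. "closeup", "medium", "long"
    double minSeconds;       // cutting constraints on shot duration
    double maxSeconds;
};

// An idiom is a named sequence of fragments: a film textbook's recipe
// for covering one kind of event, written as data rather than code.
struct Idiom {
    std::string name;
    std::vector<Fragment> fragments;
};

int main() {
    // A declarative encoding of a two-person dialogue idiom.
    Idiom twoTalk{
        "2Talk",
        {
            {"apex",     "A,B", "long",   2.0, 5.0},  // establish both actors
            {"external", "A",   "medium", 1.5, 6.0},  // favor speaker A
            {"external", "B",   "medium", 1.5, 6.0},  // favor speaker B
        },
    };
    for (const Fragment& f : twoTalk.fragments)
        std::printf("%-8s of %-4s (%s, %.1f-%.1f s)\n",
                    f.shotType.c_str(), f.subject.c_str(),
                    f.range.c_str(), f.minSeconds, f.maxSeconds);
    return 0;
}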

Online documents:
David B. Christianson, Sean E. Anderson, Li-Wei He, Daniel S. Weld, Michael F. Cohen, David H. Salesin.  Declarative Camera Control for Automatic Cinematography.  Proceedings of AAAI '96 (Portland, OR), 148-155, 1996.

Li-Wei He, Sean E. Anderson, David H. Salesin, Daniel S. Weld, Michael F. Cohen.  Declarative Camera Control for Automatic Cinematography.  Department of Computer Science and Engineering Technical Report TR-95-01-03, University of Washington, 1995.