P. H. S. Torr, Richard Szeliski, and P. Anandan
This paper describes a Bayesian approach for modeling 3D scenes as a collection of approximately planar layers that are arbitrarily positioned and oriented in the scene. In contrast to much of the previous work on layer-based motion modeling, which computes layered descriptions of 2D image motion, our work leads to a 3D description of the scene. We focus on the key problem of automatically segmenting the scene into layers based on stereo disparity data from multiple images. The prior assumptions about the scene are formulated within a Bayesian decision-making framework, and are then used to automatically determine the number of layers and the assignment of individual pixels to layers. Although using a collection of 3D layers has been previously proposed as an efficient and effective representation for multimedia applications, results to date have relied on hand segmentation. In contrast, the work described here aims at producing the best automatic segmentation possible based on disparity and color data alone.
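To make the core idea concrete, the sketch below shows a minimal MAP-style assignment of pixels to approximately planar layers from a disparity map. Each layer is modeled as a plane in disparity space, d(x, y) = ax + by + c, and each pixel is assigned to the layer under which its disparity is most probable. This is only an illustrative toy, not the paper's actual algorithm: the plane parameters, the Gaussian noise model with fixed sigma, and the per-layer priors are all assumptions made here for demonstration, and the paper's automatic determination of the number of layers is not shown.

```python
import numpy as np

def assign_layers(disparity, planes, sigma=0.5, log_prior=None):
    """Assign each pixel to the planar layer maximizing its posterior.

    disparity : (H, W) array of per-pixel stereo disparities
    planes    : list of (a, b, c) plane parameters, d(x, y) = a*x + b*y + c
    sigma     : assumed Gaussian disparity-noise level (illustrative value)
    log_prior : optional per-layer log prior (uniform if None)
    """
    h, w = disparity.shape
    ys, xs = np.mgrid[0:h, 0:w]
    if log_prior is None:
        log_prior = np.zeros(len(planes))
    # Per-pixel log posterior (up to a constant) under each planar layer,
    # assuming Gaussian disparity noise.
    scores = np.stack([
        -((disparity - (a * xs + b * ys + c)) ** 2) / (2 * sigma ** 2) + lp
        for (a, b, c), lp in zip(planes, log_prior)
    ])
    return np.argmax(scores, axis=0)  # (H, W) array of layer labels

# Synthetic example: a fronto-parallel plane (left half) next to a
# slanted plane (right half), recovered from a noiseless disparity map.
planes = [(0.0, 0.0, 2.0), (0.1, 0.0, 5.0)]
x = np.arange(40)[None, :]
d = np.broadcast_to(np.where(x < 20, 2.0, 5.0 + 0.1 * x), (30, 40))
labels = assign_layers(d, planes)
```

In the synthetic example, every pixel in the left half is labeled 0 and every pixel in the right half is labeled 1, since each half matches one plane exactly.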
Published in: Seventh International Conference on Computer Vision (ICCV'99)
Publisher: IEEE Computer Society
Copyright © 2007 IEEE. Reprinted from IEEE Computer Society. This material is posted here with permission of the IEEE. Internal or personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution must be obtained from the IEEE by writing to email@example.com. By choosing to view this document, you agree to all provisions of the copyright laws protecting it.