Kalman Filter-based Algorithms for Estimating Depth from Image Sequences

Larry H. Matthies, Richard Szeliski, and Takeo Kanade

Abstract

Using known camera motion to estimate depth from image sequences is an important problem in robot vision. Many applications of depth from motion, including navigation and manipulation, require algorithms that can estimate depth in an on-line, incremental fashion. This requires a representation that describes the uncertainty in depth estimates and a mechanism that integrates new measurements with existing depth estimates to reduce the uncertainty over time. Kalman filtering provides such a mechanism and has recently been proposed for obtaining on-line estimates of depth from motion sequences. Previous applications of Kalman filtering to depth from motion have been limited to estimating depth at the locations of a sparse set of features. In this paper, we introduce a new, pixel-based (iconic) algorithm that estimates depth and depth uncertainty at each pixel and incrementally refines these estimates over time. We describe the algorithm for translations parallel to the image plane and contrast its formulation and performance with those of a feature-based Kalman filtering algorithm. We compare the performance of the two approaches by analyzing their theoretical convergence rates, by conducting quantitative experiments with images of a flat poster, and by conducting qualitative experiments with images of a realistic outdoor scene model. The results show that the new method is an effective way to extract depth from lateral camera translations and suggest that it will play an important role in low-level vision.
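For readers unfamiliar with the incremental fusion step the abstract refers to, the sketch below shows a minimal per-pixel (scalar) Kalman measurement update of the kind used to integrate a new depth measurement with an existing estimate. The function and array names are illustrative assumptions, and the paper's full iconic algorithm also involves prediction (warping the depth map between frames) and smoothing steps that are not shown here.

```python
import numpy as np

def kalman_update(depth_est, depth_var, meas, meas_var):
    """Scalar Kalman measurement update applied independently at each pixel.

    depth_est, depth_var : current per-pixel depth estimates and variances (arrays)
    meas, meas_var       : new per-pixel depth measurements and their variances
    """
    gain = depth_var / (depth_var + meas_var)        # Kalman gain per pixel
    new_est = depth_est + gain * (meas - depth_est)  # fuse measurement into estimate
    new_var = (1.0 - gain) * depth_var               # uncertainty decreases with each update
    return new_est, new_var

# Example: fusing two noisy measurements at a single pixel.
est, var = np.array([2.0]), np.array([1.0])          # prior: depth 2.0, variance 1.0
for z in (2.4, 2.1):                                 # two measurements, each with variance 0.5
    est, var = kalman_update(est, var, np.array([z]), np.array([0.5]))
print(est, var)  # estimate moves toward the measurements; variance falls below 0.5
```

Because the gain weights the new measurement by the relative confidence of the prior and the measurement, repeated updates monotonically shrink the per-pixel variance, which is what allows the depth map to be refined on-line as more frames arrive.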

Details

Publication type: Article
Published in: Int. J. Computer Vision
Pages: 209–236
Volume: 3
Number: 3
Publisher: Kluwer Academic