Learning a Generative Model of Images by Factoring Appearance and Shape

Computer vision has grown tremendously in the past two decades. Despite all efforts, existing attempts at matching parts of the human visual system's extraordinary ability to understand visual scenes lack either scope or power. By combining the advantages of general low-level generative models and powerful layer-based and hierarchical models, this work aims at being a first step toward richer, more flexible models of images. After comparing various types of restricted Boltzmann machines (RBMs) able to model continuous-valued data, we introduce our basic model, the masked RBM, which explicitly models occlusion boundaries in image patches by factoring the appearance of any patch region from its shape. We then propose a generative model of larger images using a field of such RBMs. Finally, we discuss how masked RBMs could be stacked to form a deep model able to generate more complicated structures and suitable for various tasks such as segmentation or object recognition.
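The core idea of factoring appearance from shape can be illustrated with a toy compositing step: a binary shape mask selects, per pixel, whether the foreground or the occluded background appearance is visible. This is a minimal sketch of the occlusion concept only, not the paper's masked RBM implementation; all array names and sizes here are illustrative assumptions.

```python
import numpy as np

# Hypothetical illustration (not the paper's actual model): composing an
# image patch from a binary shape mask and two appearance layers, mirroring
# the occlusion-boundary idea described in the abstract.
rng = np.random.default_rng(0)

patch_shape = (8, 8)
mask = (rng.random(patch_shape) > 0.5).astype(float)  # 1 = foreground pixel visible
foreground = rng.random(patch_shape)                   # appearance of the front region
background = rng.random(patch_shape)                   # appearance of the occluded region

# Each pixel takes its value from exactly one appearance layer, selected by
# the shape mask; the appearances themselves are modeled independently of it.
composite = mask * foreground + (1.0 - mask) * background
```

In the full model, separate RBMs would model the mask (shape) and the appearance layers, so that the same object shape can be combined with any appearance and vice versa.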


In: Neural Computation

Details

Type: Article
Publisher: MIT Press
Volume: 23
Number: 3
Pages: 593-650
URL: http://www.mitpressjournals.org/doi/abs/10.1162/NECO_a_00086