Learning a Generative Model of Images by Factoring Appearance and Shape

Nicolas Le Roux, Nicolas Heess, Jamie Shotton, and John Winn

Abstract

Computer vision has grown tremendously in the past two decades. Despite all efforts, existing attempts at matching parts of the human visual system's extraordinary ability to understand visual scenes lack either scope or power. By combining the advantages of general low-level generative models and powerful layer-based and hierarchical models, this work aims at being a first step toward richer, more flexible models of images. After comparing various types of restricted Boltzmann machines (RBMs) able to model continuous-valued data, we introduce our basic model, the masked RBM, which explicitly models occlusion boundaries in image patches by factoring the appearance of any patch region from its shape. We then propose a generative model of larger images using a field of such RBMs. Finally, we discuss how masked RBMs could be stacked to form a deep model able to generate more complicated structures and suitable for various tasks such as segmentation or object recognition.

Details

Publication type: Article
Published in: Neural Computation
URL: http://www.mitpressjournals.org/doi/abs/10.1162/NECO_a_00086
Pages: 593-650
Volume: 23
Number: 3
Publisher: MIT Press