Learning through abstract scenes

Abstract

  

This project explores the use of abstract scenes created from clip art to study semantic scene understanding, gather common sense knowledge, and learn scene dynamics.

Generating and comprehending sentences that describe our visual world remains an open and challenging area of research in artificial intelligence. The way we describe a scene depends on our prior knowledge of the world, the objects present in the observed scene, their attributes, and their relations to other objects. Precisely characterizing this dependence, however, requires extracting complex visual information from images, which is in general a difficult and still unsolved problem. Abstract scenes allow us to sidestep these difficulties and directly study semantic scene understanding, the gathering of common sense knowledge, and the learning of scene dynamics.

   


People

   

Larry Zitnick (Microsoft Research)

Devi Parikh (Virginia Tech)

Lucy Vanderwende (Microsoft Research)

David Fouhey (Carnegie Mellon University)

 


Abstract Scenes Datasets

    

 

Abstract Scenes Dataset v1

Version 1.1 - Released February 2014

Contains data from both the CVPR 2013 and ICCV 2013 papers.

[Readme] [Download] [Demo Javascript] [Example classes] [Average Scenes]

     


Publications

  

Bringing Semantics Into Focus Using Visual Abstraction

IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013 (Oral)

C. L. Zitnick and D. Parikh

[slides] [CVPR talk (video)] [Dataset] [SUN workshop slides] [MSR Faculty Summit talk (video)]

 

Learning the Visual Interpretation of Sentences

IEEE International Conference on Computer Vision (ICCV), 2013

C. L. Zitnick, D. Parikh, and L. Vanderwende

[Supplementary material] [Dataset]

 

Predicting Object Dynamics in Scenes

IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014

D. Fouhey and C. L. Zitnick

[Supplementary material]