What has been will be again, what has been done will be done again; there is nothing new under the sun -- Ecclesiastes 1:9.
As we walk down a street, we observe a 3D scene and see the same objects from different viewpoints. In our daily lives, we look at our homes, workplaces, colleagues, family members and friends. Even when we travel to different countries, the world consists of objects and scenes similar to those we have seen before. Repetitiveness (also called sparsity) is therefore an important characteristic of images and videos for visual computing.
Repetition of visual patterns across frames is established by correspondences, as pixels and objects move from frame to frame. In this talk, we will exploit dense correspondences for visual computing. We start with videos, where dense correspondences can be naturally formulated as motion. We show how accurate motion estimation can be used for video reconstruction (denoising, super-resolution, deblocking), and discuss possible directions for long-range motion representation. We then move to large sets of images, where dense correspondences can be captured by SIFT flow, an algorithm that aligns images across different scenes. Once pixels and visual patterns are densely connected across images, we can treat a large collection of images just like a video, performing both low-level image reconstruction and high-level image understanding on a large graph.
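To make the notion of dense correspondence between frames concrete, below is a minimal sketch of motion estimation by exhaustive block matching: every block in one frame searches a small window in the next frame for its best match. This is a deliberately simple stand-in for the optical-flow and SIFT-flow methods discussed in the talk, not their actual implementation; the function name and parameters are illustrative.

```python
import numpy as np

def block_motion(prev, curr, block=8, radius=3):
    """For each block in `prev`, find the displacement (dy, dx) into `curr`
    that minimizes the sum of squared differences (exhaustive search)."""
    H, W = prev.shape
    flow = np.zeros((H // block, W // block, 2), dtype=int)
    for by in range(H // block):
        for bx in range(W // block):
            y, x = by * block, bx * block
            ref = prev[y:y + block, x:x + block].astype(float)
            best, best_d = np.inf, (0, 0)
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    # skip candidate blocks that fall outside the frame
                    if yy < 0 or xx < 0 or yy + block > H or xx + block > W:
                        continue
                    cand = curr[yy:yy + block, xx:xx + block]
                    ssd = np.sum((ref - cand) ** 2)
                    if ssd < best:
                        best, best_d = ssd, (dy, dx)
            flow[by, bx] = best_d
    return flow

# Synthetic check: the second frame is the first shifted down 2 px, right 1 px.
rng = np.random.default_rng(0)
frame1 = rng.random((32, 32))
frame2 = np.roll(frame1, shift=(2, 1), axis=(0, 1))
flow = block_motion(frame1, frame2)
print(flow[1, 1])  # interior blocks recover the displacement: [2 1]
```

Real systems replace this brute-force search with variational optical flow (for subpixel motion) or, across different scenes, with matching on dense SIFT descriptors rather than raw intensities, but the underlying question is the same: where did each pixel go?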