New Technologies for Multi-Image Fusion

As video and still cameras have become almost ubiquitous, people are taking more and more photographs and videos of the world around them. Often, the photographer's intent is to capture more than can be seen in a single photograph, so he or she takes a large set of images or a video clip to cover a large scene or a moment that extends over time. These images can be combined into an output that improves on any single input: a panorama with a wide field of view, or a photomontage that composites the best parts of several images. Creating these results, however, is still non-trivial for many users. One challenge is creating large-scale panoramas, for which capture and stitching times can be long; in addition, consumer-level point-and-shoot cameras and camera phones introduce artifacts such as motion blur. Another challenge is combining large image sets from photos or videos into composites that draw on the best parts of each image to create an enhanced photograph.

We will present several new technologies that advance the state of the art in these areas and create improved user experiences.

For panorama generation, we will demonstrate:
- ICE 2.0
- Stitching of panoramas from video
- Generating sharp panoramas from blurry videos

For generating composites, we will demonstrate:
- Video to snapshots
- De-noising and sharpening using lucky imaging
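The talk focuses on systems and results rather than implementation details, but the two themes can be illustrated with short sketches. The first is a minimal panorama-stitching example using OpenCV's high-level Stitcher API; the file names are placeholders, and this generic pipeline is not the ICE 2.0 implementation.

```python
import cv2

# Overlapping input frames (paths are illustrative placeholders).
paths = ["frame_01.jpg", "frame_02.jpg", "frame_03.jpg"]
images = [cv2.imread(p) for p in paths]

# OpenCV's high-level stitcher handles feature matching, homography
# estimation, warping, and blending internally.
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
else:
    print(f"Stitching failed with status code {status}")
```

The second sketch illustrates the basic idea behind lucky imaging: score each frame of a burst or video clip by a sharpness measure, keep only the sharpest ("lucky") frames, and average them to suppress noise. A real system would also align the frames before averaging; this simplified version assumes a static, pre-aligned stack and is not the method presented in the talk.

```python
import cv2
import numpy as np

def sharpness(img):
    # Variance of the Laplacian is a common proxy for image sharpness.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def lucky_composite(frames, keep_fraction=0.2):
    # Keep the sharpest fraction of frames and average them.
    # Assumes the frames are already aligned.
    scored = sorted(frames, key=sharpness, reverse=True)
    k = max(1, int(len(scored) * keep_fraction))
    stack = np.stack([f.astype(np.float64) for f in scored[:k]])
    return np.clip(stack.mean(axis=0), 0, 255).astype(np.uint8)
```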

Speakers: Michael Cohen