Speaker: Jacob Chakareski
Affiliation: University of Alabama
Host: Philip Chou
Date recorded: 21 February 2014
Multi-camera systems have enabled a plethora of novel applications spanning entertainment, remote monitoring, and telecommuting. A key component of all of them is the interactive delivery of different viewpoint perspectives of the 3D scene of interest, which creates a sensation of immersion within the scene for the user. This requires intelligent and adaptive processing of the captured content in response to the user's actions of switching between viewpoints, while simultaneously addressing the challenges imposed by the time-varying nature of the communication channel. The talk will describe a novel user-action-driven framework for joint view- and rate-scalable encoding of multi-camera video signals and its prospective applications. Its building blocks comprise a scalable encoder, an optimization formulation for view- and rate-scalable encoding, and a probabilistic user behavior model. I will examine their key properties and how they fit coherently within the overall system. Through experiments, I will demonstrate the framework's advantages over competing methods. If time permits, I would like to conclude with a brief overview of a few other investigations of context-driven computer communication.
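To give a flavor of how a probabilistic user behavior model can drive view- and rate-scalable encoding, here is a minimal, hypothetical sketch (not the speaker's actual method): each view has a set of discrete rate-distortion operating points, each view has a probability of being watched (supplied by the behavior model), and a greedy allocator spends a total rate budget where it most reduces the expected distortion. The function name, data layout, and greedy strategy are all illustrative assumptions.

```python
# Hypothetical sketch: expected-distortion-driven rate allocation across views.
# rd_points[v] is a list of (rate, distortion) operating points for view v,
# sorted by increasing rate; view_probs[v] is the probability (from an assumed
# user behavior model) that the user is watching view v.

def allocate_rates(rd_points, view_probs, rate_budget):
    """Greedily upgrade, one operating point at a time, the view whose next
    R-D point yields the largest expected-distortion drop per extra bit,
    while the total rate stays within the budget."""
    choice = [0] * len(rd_points)  # index of the chosen R-D point per view
    total_rate = sum(points[0][0] for points in rd_points)
    while True:
        best_view, best_gain = None, 0.0
        for v, points in enumerate(rd_points):
            i = choice[v]
            if i + 1 >= len(points):
                continue  # view already at its highest operating point
            extra_rate = points[i + 1][0] - points[i][0]
            dist_drop = points[i][1] - points[i + 1][1]
            if extra_rate <= 0 or total_rate + extra_rate > rate_budget:
                continue  # upgrade would exceed the budget
            gain = view_probs[v] * dist_drop / extra_rate
            if gain > best_gain:
                best_view, best_gain = v, gain
        if best_view is None:
            return choice, total_rate  # no affordable upgrade remains
        i = choice[best_view]
        total_rate += rd_points[best_view][i + 1][0] - rd_points[best_view][i][0]
        choice[best_view] = i + 1

# Example: two views, the first watched 70% of the time, budget of 500 units.
rd = [[(100, 50.0), (200, 30.0), (400, 20.0)],
      [(100, 60.0), (300, 25.0)]]
choice, used = allocate_rates(rd, [0.7, 0.3], rate_budget=500)
```

In this toy run both views end up upgraded one step (`choice == [1, 1]`, using the full 500-unit budget): the likely view is upgraded first, and the remaining budget then favors the large distortion drop available for the second view.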
©2014 Microsoft Corporation. All rights reserved.