Sanjeev Mehrotra, Wei-ge Chen, Zhengyou Zhang, and Philip A. Chou
With increasing computational power, network bandwidth, and improvements in display and capture technologies, fully immersive conferencing and tele-immersion are coming ever closer to reality. Beyond video, one of the key components needed is high-quality spatialized audio. This paper presents a relatively simple, low-complexity implementation that allows realistic audio spatialization of arbitrary positions in a 3D video conference. When combined with pose tracking, it also allows the audio to change according to the position on the screen at which the viewer is looking.
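The abstract does not detail the spatialization method itself. As a purely illustrative sketch (not the paper's algorithm), audio can be spatialized at low complexity by applying an interaural time difference (ITD) and a level-based pan to a mono source; the head-radius constant and Woodworth ITD model below are common textbook assumptions, not taken from this paper.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, at room temperature
HEAD_RADIUS = 0.0875    # m, approximate average head radius (assumption)

def itd_seconds(azimuth_rad):
    """Woodworth spherical-head model for interaural time difference."""
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (azimuth_rad + math.sin(azimuth_rad))

def spatialize(mono, azimuth_rad, sample_rate=16000):
    """Render a mono signal as a stereo pair using an ITD (integer sample
    delay at the far ear) plus constant-power level panning.
    azimuth_rad: 0 = straight ahead, positive = source to the right."""
    delay = int(round(abs(itd_seconds(azimuth_rad)) * sample_rate))
    # Map azimuth in [-pi/2, pi/2] to a pan position in [0, 1]
    pan = (azimuth_rad / (math.pi / 2) + 1.0) / 2.0
    pan = min(max(pan, 0.0), 1.0)
    gain_l = math.cos(pan * math.pi / 2)  # constant-power pan law
    gain_r = math.sin(pan * math.pi / 2)

    def sample(i):
        return mono[i] if 0 <= i < len(mono) else 0.0

    left, right = [], []
    for n in range(len(mono) + delay):
        if azimuth_rad >= 0:  # source on the right: left ear hears it later
            left.append(gain_l * sample(n - delay))
            right.append(gain_r * sample(n))
        else:                 # source on the left: right ear hears it later
            left.append(gain_l * sample(n))
            right.append(gain_r * sample(n - delay))
    return left, right
```

A real tele-immersion system would typically use measured HRTFs rather than this simple cue model, but the sketch shows why per-participant spatialization can remain cheap: one delay line and two gains per source.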
Published in: Int'l Conf. on Multimedia and Expo (ICME)
© 2011 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.