Immerse in a Spatial Auditory Scene and Capture Crystal-Clear Audio
Zhengyou Zhang, Qin Cai, and Wei-ge Chen
About the System
In multiparty conferencing, one hears the voices of more than one remote participant. Current commercial systems mix them into a single mono audio stream, so all remote voices sound as if they come from the same location when played over loudspeakers, or from inside the listener's head when played over headphones. This is in sharp contrast to real life, where each voice has its own distinct location. We have built, and will demonstrate, technologies that enhance the multiparty conferencing experience with highly realistic and immersive spatial audio, over both loudspeakers and headphones. This significantly improves the conferencing experience, since each participant can easily identify the current remote talker and focus on the content being discussed.
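To illustrate the idea of giving each remote talker a distinct location, here is a minimal, hypothetical stereo-panning sketch (not the demonstrated system, which is far more sophisticated): each mono talker stream is placed at an azimuth using constant-power panning for the interaural level difference plus a small interaural time delay. Real spatial audio renderers typically use HRTFs rather than this toy model.

```python
import numpy as np

def spatialize(mono_signals, azimuths_deg, fs=16000):
    """Place each mono talker stream at a distinct azimuth in [-90, 90] deg
    using constant-power panning (ILD) plus a small interaural time delay
    (ITD). A toy approximation; real systems use HRTF-based rendering."""
    max_itd = int(round(0.0007 * fs))  # ~0.7 ms max interaural delay
    n = max(len(s) for s in mono_signals) + max_itd
    out = np.zeros((n, 2))             # columns: left, right
    for sig, az in zip(mono_signals, azimuths_deg):
        sig = np.asarray(sig, dtype=float)
        theta = np.deg2rad((az + 90.0) / 2.0)          # map [-90,90] -> [0,90] deg
        gain_l, gain_r = np.cos(theta), np.sin(theta)  # constant-power pan law
        itd = int(round(max_itd * np.sin(np.deg2rad(az))))
        # A source on the right (itd > 0) reaches the left ear later.
        dl, dr = max(0, itd), max(0, -itd)
        out[dl:dl + len(sig), 0] += gain_l * sig
        out[dr:dr + len(sig), 1] += gain_r * sig
    return out
```

For example, mixing three talker streams at azimuths -60, 0, and 60 degrees yields a stereo scene in which each voice occupies its own direction, which is what lets a listener differentiate the current talker.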
Furthermore, multichannel AEC (Acoustic Echo Cancellation) is a crucial component for high-quality audio during conferencing, especially when no headset is used. We will also demonstrate real-time AEC, so that the remote side hears only the near-end participant's speech, without its own echo.
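The core principle behind AEC can be sketched with a single-channel NLMS adaptive filter (the demonstrated system is multichannel and more elaborate; this is only an illustrative assumption): the filter learns an estimate of the loudspeaker-to-microphone echo path and subtracts the predicted echo from the microphone signal, leaving the near-end speech.

```python
import numpy as np

def nlms_aec(far_end, mic, taps=128, mu=0.5, eps=1e-6):
    """Single-channel NLMS echo canceller sketch.
    far_end: signal played on the local loudspeaker.
    mic:     microphone signal = echo of far_end + near-end speech.
    Returns the residual sent to the remote side (echo removed)."""
    w = np.zeros(taps)          # adaptive estimate of the echo path
    x_buf = np.zeros(taps)      # most recent far-end samples, newest first
    out = np.zeros(len(mic))
    for n in range(len(mic)):
        x_buf = np.roll(x_buf, 1)
        x_buf[0] = far_end[n]
        echo_hat = w @ x_buf    # predicted echo at the microphone
        e = mic[n] - echo_hat   # residual = near-end speech + estimation error
        # Normalized LMS update: step size scaled by input power
        w += (mu / (x_buf @ x_buf + eps)) * e * x_buf
        out[n] = e
    return out
```

With only far-end echo at the microphone, the residual decays toward zero as the filter converges, which is exactly the behavior that keeps remote participants from hearing their own voices back.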