Understanding Computer-Directed Utterances in Multi-User Dialog Systems

This work aims to understand user requests when multiple users are interacting with each other and with a spoken dialog system. More specifically, we explore the use of multi-human conversational context to improve domain detection in a human-computer interaction system. We investigate the distinct effects of human-directed and computer-directed context, and compare the impact of different context window sizes. Furthermore, we employ topic segmentation to chunk conversations and determine context boundaries. The experimental results show that conversational context helps reduce the domain detection error rate, especially in certain domains. Although computer-directed context is more reliable on its own, combining both computer- and human-addressed utterances within a reasonable window size performs best.
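The core idea of attaching a window of preceding utterances as context for domain detection can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the function names, the `(text, addressee)` representation, and the boolean flag for restricting context to computer-directed turns are all assumptions made for clarity.

```python
from collections import deque

def build_context_features(utterances, window_size, include_human_directed=True):
    """Attach up to `window_size` preceding utterances as context for each turn.

    `utterances` is a list of (text, addressee) pairs, where addressee is
    "computer" or "human". Setting include_human_directed=False keeps only
    computer-directed turns in the context, mirroring the comparison of
    context types described in the abstract (hypothetical setup).
    """
    history = deque(maxlen=window_size)  # sliding context window
    examples = []
    for text, addressee in utterances:
        context = [t for t, a in history
                   if include_human_directed or a == "computer"]
        examples.append({"utterance": text, "context": context})
        history.append((text, addressee))
    return examples
```

Each resulting example pairs the current utterance with its context turns, which a downstream domain classifier could consume as additional features; topic-segmentation boundaries could likewise be used to clear `history` at the start of each new segment.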


Publisher  IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)

Details

Type  Inproceedings