Resolving Referring Expressions in Conversational Dialogs for Natural User Interfaces

  • Asli Celikyilmaz,
  • Zhaleh Feizollahi,
  • Dilek Hakkani-Tür,
  • Ruhi Sarikaya

Published by EMNLP


Unlike traditional over-the-phone spoken dialog systems (SDSs), modern dialog systems tend to have visual rendering on the device screen as an additional modality for communicating the system’s response to the user. Visual display of the system’s response not only changes how humans behave when interacting with devices, but also opens new research areas in SDSs. Identifying and resolving references to on-screen items in user utterances is one critical problem for achieving natural and accurate human–machine communication. We pose the problem as a classification task: correctly identifying the intended on-screen item(s) from user utterances. Using syntactic and semantic features as well as context features from the display screen, our model resolves different types of referring expressions with up to 90% accuracy. Our experiments also show that the proposed model is robust to changes in domain and screen layout.
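To make the classification framing concrete, the following is a minimal illustrative sketch, not the paper's actual model: each (utterance, on-screen item) pair is scored with simple features, and the highest-scoring item is taken as the referent. The feature set (lexical overlap with the item's title, an ordinal/positional cue) and the hand-set weights are hypothetical placeholders; the paper's model learns richer syntactic, semantic, and screen-context features.

```python
# Hypothetical sketch: resolving a referring expression by scoring each
# (utterance, candidate on-screen item) pair. Features and weights are
# illustrative placeholders, not the paper's learned model.

ORDINALS = {"first": 0, "second": 1, "third": 2, "last": -1}

def features(utterance, item_title, item_index, n_items):
    toks = set(utterance.lower().split())
    title_toks = set(item_title.lower().split())
    # Lexical overlap between the utterance and the item's on-screen title.
    overlap = len(toks & title_toks) / max(len(title_toks), 1)
    # Positional cue ("the second one") matching the item's screen position.
    ordinal = 0.0
    for word, idx in ORDINALS.items():
        if word in toks and (idx % n_items) == item_index:
            ordinal = 1.0
    return {"overlap": overlap, "ordinal": ordinal}

WEIGHTS = {"overlap": 2.0, "ordinal": 3.0}  # hand-set for illustration

def resolve(utterance, screen_items):
    """Return the index of the on-screen item the utterance refers to."""
    def score(indexed_item):
        i, title = indexed_item
        f = features(utterance, title, i, len(screen_items))
        return sum(WEIGHTS[k] * v for k, v in f.items())
    return max(enumerate(screen_items), key=score)[0]
```

For example, with a list of three movie titles on screen, "play the second one" is resolved via the positional feature, while "open blade runner" is resolved via lexical overlap with the item title.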