Gaze-enhanced speech recognition

Malcolm Slaney, Rahul Rajan, Andreas Stolcke, and Partha Parthasarathy

Abstract

This work demonstrates, through simulations and experiments, the potential of eye-gaze data to improve speech-recognition results. Multimodal interfaces, where users see information on a display and use their voice to control an interaction, are of growing importance as mobile phones and tablets become more popular. We demonstrate an improvement in speech-recognition performance, as measured by word error rate, by rescoring the output of a large-vocabulary speech-recognition system. We use eye-gaze data as a spotlight and collect bigram word statistics near where the user looks in time and space. We see a 25% relative reduction in word error rate over a generic language model, and approximately a 10% reduction in errors over a strong, page-specific baseline language model.
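
The abstract describes the method only at a high level. The following Python sketch illustrates one plausible reading of the gaze-spotlight idea: count bigrams among on-screen words that fall near a gaze fixation in space and time, interpolate the resulting estimates with a baseline bigram language model, and use the interpolated scores to rescore recognizer hypotheses. All names and parameters here (ScreenWord, Fixation, the 150-pixel radius, 2-second window, and interpolation weight lam) are hypothetical illustrations, not the authors' implementation.

```python
import math
from collections import Counter
from dataclasses import dataclass


@dataclass
class ScreenWord:
    text: str
    x: float  # screen position of the word (pixels)
    y: float
    t: float  # time the word was attended (seconds)


@dataclass
class Fixation:
    x: float  # gaze position (pixels)
    y: float
    t: float  # fixation time (seconds)


def spotlight_bigrams(words, fixations, radius=150.0, window=2.0):
    """Count bigrams among words near a fixation in space and time
    ("gaze as a spotlight"). Radius and window are illustrative."""
    counts = Counter()
    for fix in fixations:
        nearby = [w for w in words
                  if math.hypot(w.x - fix.x, w.y - fix.y) <= radius
                  and abs(w.t - fix.t) <= window]
        nearby.sort(key=lambda w: w.t)
        for a, b in zip(nearby, nearby[1:]):
            counts[(a.text, b.text)] += 1
    return counts


def interpolated_logprob(bigram, gaze_counts, base_logprob, lam=0.3):
    """Linearly interpolate the gaze-derived bigram estimate with the
    baseline language-model probability."""
    total = sum(gaze_counts.values()) or 1
    p_gaze = gaze_counts[bigram] / total
    p_base = math.exp(base_logprob)
    return math.log(lam * p_gaze + (1.0 - lam) * p_base)


def rescore_nbest(nbest, gaze_counts, base_lm, lam=0.3):
    """Pick the best hypothesis from an n-best list. Each hypothesis is
    (words, acoustic_logprob); base_lm maps a bigram to its baseline
    log probability (a floor is used for unseen bigrams)."""
    def score(hyp):
        words, acoustic = hyp
        lm = sum(interpolated_logprob((a, b), gaze_counts,
                                      base_lm.get((a, b), math.log(1e-6)),
                                      lam)
                 for a, b in zip(words, words[1:]))
        return acoustic + lm
    return max(nbest, key=score)
```

Interpolating rather than replacing the baseline model keeps its coverage for words the user never looked at, which is consistent with the abstract's framing of gaze as a rescoring signal on top of a page-specific language model.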

Details

Publication type: Inproceedings
Published in: Proc. IEEE ICASSP
Publisher: IEEE SPS