Speech Utterance Classification Model Training without Manual Transcriptions

Ye-Yi Wang, John Lee, and Alex Acero

Abstract

Speech utterance classification has been widely applied to a variety of spoken language understanding tasks, including call routing, dialog systems, and command and control. Most speech utterance classification systems adopt a data-driven statistical learning approach, which requires manually transcribed and annotated training data. In this paper we introduce a novel classification model training approach based on unsupervised language model adaptation. It requires only the wave files of the training speech utterances and their corresponding classification destinations for model training; no manual transcription of the utterances is necessary. Experimental results show that this approach, which is much cheaper to implement, achieves classification accuracy at the same level as a model trained with manual transcriptions.
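The core idea, destination-labeled utterances without transcriptions, can be illustrated with a simplified sketch. This is not the authors' system: it assumes ASR hypotheses (here, hypothetical placeholder strings) stand in for manual transcriptions, and it uses a basic per-destination unigram language model with add-one smoothing to score and route an utterance. The function names and example utterances are invented for illustration.

```python
from collections import Counter
import math

def train_class_lms(recognized, labels):
    """Train one unigram language model per destination class.

    `recognized` holds (possibly errorful) ASR hypotheses that stand in
    for manual transcriptions; only the destination labels come from
    humans (e.g., where the call was actually routed).
    """
    counts = {}   # label -> word Counter
    vocab = set()
    for text, label in zip(recognized, labels):
        words = text.lower().split()
        counts.setdefault(label, Counter()).update(words)
        vocab.update(words)
    return counts, vocab

def classify(text, counts, vocab, smoothing=1.0):
    """Route an utterance to the destination whose class LM assigns the
    hypothesis the highest add-one-smoothed log-likelihood."""
    words = text.lower().split()
    best_label, best_ll = None, float("-inf")
    for label, c in counts.items():
        total = sum(c.values())
        ll = sum(math.log((c[w] + smoothing) / (total + smoothing * len(vocab)))
                 for w in words)
        if ll > best_ll:
            best_label, best_ll = label, ll
    return best_label

# Hypothetical training data: ASR output paired with routing destinations.
recognized = ["check my account balance", "balance please",
              "talk to an agent", "agent now please"]
labels = ["balance", "balance", "agent", "agent"]
counts, vocab = train_class_lms(recognized, labels)
print(classify("what is my balance", counts, vocab))
```

In the paper's setting the language models are additionally adapted in an unsupervised loop (re-recognizing the audio with the updated models); the sketch above shows only the transcription-free scoring step.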

Details

Publication type: Inproceedings
Published in: IEEE International Conference on Acoustics, Speech and Signal Processing
Pages: I-553–556
Volume: 1
Address: Toulouse, France
Publisher: Institute of Electrical and Electronics Engineers, Inc.