Ye-Yi Wang, John Lee, and Alex Acero
Speech utterance classification has been widely applied to a variety of spoken language understanding tasks, including call routing, dialog systems, and command and control. Most speech utterance classification systems adopt a data-driven statistical learning approach, which requires manually transcribed and annotated training data. In this paper we introduce a novel classification model training approach based on unsupervised language model adaptation. It requires only the wave files of the training speech utterances and their corresponding classification destinations for model training. No manual transcription of the utterances is necessary. Experimental results show that this approach, which is much cheaper to implement, achieves classification accuracy at the same level as a model trained with manual transcriptions.
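The abstract does not spell out the training procedure, but the general idea it describes can be sketched as follows: run a speech recognizer over the training waveforms, then train a statistical classifier on the (possibly errorful) recognized text paired with the known destinations, rather than on manual transcriptions. The sketch below assumes a simple unigram naive-Bayes classifier with add-one smoothing; the `recognize` step is a hypothetical placeholder for an ASR system, and none of the function names come from the paper.

```python
# Minimal sketch: train a destination classifier from ASR hypotheses
# instead of manual transcriptions. The classifier here is a unigram
# naive-Bayes model with add-one (Laplace) smoothing; it is an
# illustrative stand-in, not the paper's exact model.
from collections import Counter, defaultdict
import math


def recognize(wave_file):
    """Hypothetical placeholder: run ASR on a wave file and return the
    1-best hypothesis text. In the paper's setting, no manual
    transcription is used; only this automatic output is available."""
    raise NotImplementedError


def train_classifier(transcripts, labels):
    """Train from (possibly errorful) text paired with destinations."""
    word_counts = defaultdict(Counter)  # per-destination word counts
    class_counts = Counter()            # destination priors
    for text, label in zip(transcripts, labels):
        class_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, class_counts


def classify(model, text):
    """Return the destination with the highest posterior for `text`."""
    word_counts, class_counts = model
    total = sum(class_counts.values())
    vocab = {w for counts in word_counts.values() for w in counts}
    best, best_lp = None, float("-inf")
    for label in class_counts:
        lp = math.log(class_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.split():
            # Add-one smoothing so unseen words get nonzero probability.
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best
```

In the full approach, the recognized text could also be fed back to adapt the recognizer's language model to the domain (the unsupervised language model adaptation the abstract names), and the classifier retrained on the improved hypotheses.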
In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Publisher Institute of Electrical and Electronics Engineers, Inc.
© 2007 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.