Xiaolong Li, Yun-Cheng Ju, Li Deng, and Alex Acero
18 April 2007
Recently, there has been rapidly increasing interest in using ASR for children's language learning. An automatic reading tutor built with ASR technologies can track children's oral reading against story texts, detect reading miscues, and measure the level of reading fluency. Such a system may even diagnose the nature of the miscues and provide feedback to improve reading skills. In these tasks, N-gram language models (LMs) are typically either trained on the whole story text or generated from the current story sentence with heuristic probabilities for both the regular words in the sentence and explicitly predicted reading miscues. The drawback of these methods is that they either require a relatively large amount of text and are time-consuming, or need a large LM and complex processing to accommodate all possible words in the stories as well as in the reading miscues. This paper proposes an efficient and robust LM that can be built on-the-fly from the current reading sentence. With an additional parallel "garbage" model, the LM also deals effectively with a wide range of reading miscues. Our experiments on a standard children's reading task show that the new LM achieves state-of-the-art performance in detecting reading miscues at high speed, even though only a relatively simple children's acoustic model was used.
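To make the idea concrete, here is a minimal, hypothetical sketch of the kind of on-the-fly sentence LM the abstract describes: for each word position in the current story sentence, a parallel "garbage" arc competes with the expected word so that reading miscues can be absorbed. The function name, structure, and the probabilities are illustrative assumptions, not values or code from the paper.

```python
# Hypothetical sketch: per-sentence LM with a parallel "garbage" model.
# Each position allows either the expected word (read correctly) or a
# <garbage> token that absorbs miscues. Probabilities are heuristic
# placeholders, not the paper's actual parameters.

def build_sentence_lm(sentence, p_correct=0.9, p_garbage=0.1):
    """Return, for each word position, the list of (token, probability) arcs."""
    arcs = []
    for word in sentence.lower().split():
        arcs.append([(word, p_correct), ("<garbage>", p_garbage)])
    return arcs

lm = build_sentence_lm("The cat sat on the mat")
# One arc set per word; each set pairs the expected word with a garbage arc.
```

Because the model is built from a single sentence, it can be regenerated instantly as the child advances through the story, which is the efficiency argument the abstract makes.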
|Published in||Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)|
|Publisher||Institute of Electrical and Electronics Engineers, Inc.|
© 2007 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.