Li Deng, Dong Yu, Xiaolong Li, and Alex Acero
We present a structured speech model that is equipped with the capability of jointly representing incomplete articulation and long-span co-articulation in natural human speech. Central to this model is a compact statistical parameterization of the highly regular dynamic patterns (exhibited in the hidden vocal-tract-resonance domain) that are driven by stochastic segmental targets. We provide a rigorous mathematical description of this model, and present novel algorithms for learning the full set of model parameters using the cepstral data of speech. In particular, the gradient ascent techniques for learning variance parameters (for both resonance targets and cepstral prediction residuals) are described in detail. Phonetic recognition experiments are carried out using two paradigms of N-best rescoring and lattice search. Both sets of results demonstrate higher recognition accuracy achieved by the new model compared with the best HMM system. The higher accuracy is consistently observed, with and without combining HMM scores, and with and without including the references in the N-best lists and lattices. Further, the new model, with its rich yet parameter-free structure, uses only context-independent, single-modal Gaussian parameters, which amount to fewer than one percent of the parameters in the context-dependent HMM system with mixture distributions.
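The abstract mentions gradient ascent for learning variance parameters of Gaussian residuals. As a minimal sketch of that idea, not the paper's actual algorithm, the snippet below fits the variance of zero-mean Gaussian prediction residuals by gradient ascent on the log-likelihood, parameterizing by log-variance so the estimate stays positive; `fit_residual_variance` and its settings are hypothetical names chosen for this illustration.

```python
import math
import random

def fit_residual_variance(residuals, steps=500, lr=0.05):
    """Gradient ascent on the zero-mean Gaussian log-likelihood of the
    residuals, parameterized by log-variance for positivity.
    Illustrative sketch only; not the paper's exact update rule."""
    n = len(residuals)
    sum_sq = sum(r * r for r in residuals)
    log_var = 0.0  # start at variance 1.0
    for _ in range(steps):
        var = math.exp(log_var)
        # d/d(log var) of sum_t log N(r_t; 0, var)
        grad = -0.5 * n + 0.5 * sum_sq / var
        log_var += lr * grad / n  # per-sample step size for stability
    return math.exp(log_var)

# Synthetic residuals with true variance 4.0; the ascent converges to
# the maximum-likelihood estimate, the mean squared residual.
random.seed(0)
data = [random.gauss(0.0, 2.0) for _ in range(2000)]
var_hat = fit_residual_variance(data)
```

At the stationary point the gradient vanishes when the variance equals the mean squared residual, recovering the closed-form maximum-likelihood estimate; the ascent formulation matters when, as in the paper's model, the variance interacts with other parameters and no closed form is available.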
Published in: Proceedings of IEEE Workshop on Automatic Speech Recognition and Understanding
Publisher: Institute of Electrical and Electronics Engineers, Inc.
© 2007 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.