A Long-Contextual-Span Model of Resonance Dynamics for Speech Recognition: Parameter Learning and Recognizer Evaluation

  • Li Deng,
  • Dong Yu,
  • Xiaolong (Shiao-Long) Li,
  • Alex Acero

Proceedings of the IEEE Workshop on Automatic Speech Recognition and Understanding

Published by the Institute of Electrical and Electronics Engineers, Inc.

We present a structured speech model equipped with the capability to jointly represent incomplete articulation and long-span co-articulation in natural human speech. Central to this model is a compact statistical parameterization of the highly regular dynamic patterns, exhibited in the hidden vocal-tract-resonance domain, that are driven by stochastic segmental targets. We provide a rigorous mathematical description of this model and present novel algorithms for learning the full set of model parameters from cepstral speech data. In particular, the gradient ascent techniques for learning variance parameters (for both resonance targets and cepstral prediction residuals) are described in detail. Phonetic recognition experiments are carried out under two paradigms: N-best rescoring and lattice search. Both sets of results demonstrate higher recognition accuracy for the new model than for the best HMM system. The higher accuracy is observed consistently, with and without combining HMM scores, and with and without including the references in the N-best lists and lattices. Further, the new model, with its rich but parameter-free structure, uses only context-independent, unimodal Gaussian parameters, amounting to fewer than one percent of the parameters in the context-dependent HMM system with mixture distributions.
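To make the target-directed resonance dynamics concrete, below is a minimal Python/NumPy sketch of one way such a hidden trajectory can be generated: per-segment vocal-tract-resonance (VTR) targets are smoothed by a bidirectional FIR filter, so that each frame blends targets from a long contextual span and transitions at segment boundaries remain incomplete. This is a sketch under assumptions; the function name, filter form, constants, and segment targets are illustrative and are not the paper's actual parameterization or values.

```python
import numpy as np

def fir_smooth_targets(targets, gamma=0.6, span=7):
    """Smooth a frame-level VTR target sequence with a symmetric FIR filter.

    targets : (T, D) array of per-frame targets (each frame carries the
              target of the phone segment it belongs to).
    gamma   : decay constant controlling how far co-articulation reaches
              (illustrative value).
    span    : one-sided filter length in frames, i.e. the contextual span.
    """
    T, D = targets.shape
    taps = np.arange(-span, span + 1)
    weights = gamma ** np.abs(taps)
    weights /= weights.sum()                 # normalize so a long steady segment reaches its target
    z = np.zeros_like(targets)
    for t in range(T):
        idx = np.clip(t + taps, 0, T - 1)    # clamp at utterance edges
        z[t] = weights @ targets[idx]        # weighted blend of neighboring segment targets
    return z

# Toy example: three phone segments with hypothetical (F1, F2) targets in Hz.
segment_targets = [(500.0, 1500.0), (300.0, 2200.0), (700.0, 1200.0)]
durations = [20, 15, 25]                     # frames per segment (illustrative)
frame_targets = np.vstack([np.tile(t, (d, 1))
                           for t, d in zip(segment_targets, durations)])

trajectory = fir_smooth_targets(frame_targets)
print(trajectory[18:22])                     # frames around a boundary show smooth, incomplete transitions
```

Printing frames around the first segment boundary shows the resonance values gliding between the two targets rather than jumping, which is the kind of incomplete articulation and long-span co-articulation the abstract refers to; the actual model additionally treats the targets as stochastic and maps the hidden trajectory to cepstra with a prediction residual.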