Dong Yu, Mike Seltzer, Jinyu Li, Jui-Ting Huang, and Frank Seide
Recent studies have shown that deep neural networks (DNNs) perform significantly better than shallow networks and Gaussian mixture models (GMMs) on large vocabulary speech recognition tasks. In this paper, we argue that the improved accuracy achieved by the DNNs is the result of their ability to extract discriminative internal representations that are robust to the many sources of variability in speech signals. We show that these representations become increasingly insensitive to small perturbations in the input with increasing network depth, which leads to better speech recognition performance with deeper networks. We also show that DNNs cannot extrapolate to test samples that are substantially different from the training examples. If the training data are sufficiently representative, however, internal features learned by the DNN are relatively stable with respect to speaker differences, bandwidth differences, and environment distortion. This enables DNN-based recognizers to perform as well as or better than state-of-the-art systems based on GMMs or shallow networks without the need for explicit model adaptation or feature normalization.
Published in: International Conference on Learning Representations