Feature Learning in Deep Neural Networks - Studies on Speech Recognition

Recent studies have shown that deep neural networks (DNNs) perform significantly better than shallow networks and Gaussian mixture models (GMMs) on large-vocabulary speech recognition tasks. In this paper, we argue that the improved accuracy achieved by DNNs is the result of their ability to extract discriminative internal representations that are robust to the many sources of variability in speech signals. We show that these representations become increasingly insensitive to small perturbations in the input with increasing network depth, which leads to better speech recognition performance with deeper networks. We also show that DNNs cannot extrapolate to test samples that are substantially different from the training examples. If the training data are sufficiently representative, however, internal features learned by the DNN are relatively stable with respect to speaker differences, bandwidth differences, and environmental distortion. This enables DNN-based recognizers to perform as well as, or better than, state-of-the-art systems based on GMMs or shallow networks without the need for explicit model adaptation or feature normalization.
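The paper's central measurement, how far a layer's hidden activations move when the input is slightly perturbed, can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' code: it builds a hypothetical, randomly initialized sigmoid network (the paper studies trained DNNs on real speech features) and reports the per-layer sensitivity ratio.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward_activations(x, weights, biases):
    """Return the hidden activations of every layer for input x."""
    acts = []
    h = x
    for W, b in zip(weights, biases):
        h = sigmoid(W @ h + b)
        acts.append(h)
    return acts

# Hypothetical toy network: 6 sigmoid layers of width 512 with random
# (untrained) weights; layer sizes and scaling are illustrative choices.
dim, depth = 512, 6
weights = [rng.normal(0.0, 1.0 / np.sqrt(dim), (dim, dim)) for _ in range(depth)]
biases = [np.zeros(dim) for _ in range(depth)]

x = rng.normal(size=dim)            # a "clean" input frame
eps = 1e-3 * rng.normal(size=dim)   # a small input perturbation

clean = forward_activations(x, weights, biases)
noisy = forward_activations(x + eps, weights, biases)

# Per-layer sensitivity: how much each layer's activations move,
# relative to the size of the input perturbation.
for i, (hc, hn) in enumerate(zip(clean, noisy), start=1):
    ratio = np.linalg.norm(hn - hc) / np.linalg.norm(eps)
    print(f"layer {i}: ||dh|| / ||dx|| = {ratio:.4f}")
```

On a trained recognizer, the paper's claim is that this ratio shrinks as depth increases; the toy network above only demonstrates how the quantity is computed, not the empirical result.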


In International Conference on Learning Representations (ICLR), 2013

Details

Type: Inproceedings