Subspace Gaussian Mixture Models for Speech Recognition

Daniel Povey, Lukas Burget, Mohit Agarwal, Pinar Akyazi, Kai Feng, Arnab Ghoshal, Ondrej Glembek, Nagendra Kumar Goel, Martin Karafiat, Ariya Rastrow, Richard C. Rose, Petr Schwarz, and Samuel Thomas


We describe an acoustic modeling approach in which all phonetic states share a common Gaussian Mixture Model structure, and the means and mixture weights vary in a subspace of the total parameter space. We call this a Subspace Gaussian Mixture Model (SGMM). Globally shared parameters define the subspace. This style of acoustic model allows for a much more compact representation and gives better results than a conventional modeling approach, particularly with smaller amounts of training data.
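To make the parameter sharing concrete, here is a minimal numerical sketch of the SGMM idea: each state j is represented only by a low-dimensional vector v_j, while globally shared projection matrices M_i and weight vectors w_i map those vectors to per-state Gaussian means and mixture weights (means mu_ji = M_i v_j, weights via a softmax over w_i^T v_j). All dimensions and the random parameter values below are illustrative toy choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (assumptions): feature dim, subspace dim, shared Gaussians, states.
D, S, I, J = 13, 20, 4, 3

# Globally shared parameters that define the subspace (random for illustration):
M = rng.standard_normal((I, D, S))   # one D x S projection matrix per shared Gaussian
w = rng.standard_normal((I, S))      # weight-projection vectors
Sigma = np.stack([np.eye(D)] * I)    # shared covariances (identity for simplicity)

# State-specific parameters: just one S-dimensional vector per phonetic state.
v = rng.standard_normal((J, S))

# Derive each state's Gaussian means from the subspace: mu_ji = M_i v_j.
means = np.einsum('ids,js->jid', M, v)            # shape (J, I, D)

# Derive each state's mixture weights by a softmax over the shared Gaussians.
logits = v @ w.T                                  # w_i^T v_j, shape (J, I)
weights = np.exp(logits)
weights /= weights.sum(axis=1, keepdims=True)     # rows sum to 1
```

Note how compact the state-specific description is: each state stores only S numbers instead of I full means and weights, which is the source of the compactness and the improved behavior with small training sets described above.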


Publication type: Inproceedings
Published in: ICASSP