A Study on Soft Margin Estimation for LVCSR

We extend our previous work on soft margin estimation (SME) to large vocabulary continuous speech recognition in two respects. The first is to use the extended Baum-Welch method in place of the conventional generalized probabilistic descent algorithm for optimization. The second is to compare SME with minimum classification error (MCE) training under the same implementation details, in order to show that it is indeed the margin component of the objective function, together with margin-based utterance and frame selection, that contributes to the success of SME. Tested on the 5k-word Wall Street Journal task, all the SME methods outperform MCE. The best SME approach achieves a relative word error rate reduction of about 19% over our best baseline. This improvement can be demonstrated only because of our use of a margin-based objective function and the extended Baum-Welch parameter optimization method.
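For readers coming to this abstract without the earlier SME papers, the margin-based objective it refers to has, in that prior work, the general form sketched below. The symbols ($\Lambda$ for the HMM parameters, $\rho$ for the soft margin, $\lambda$ for a coefficient balancing margin size against empirical risk, and $d(O_i,\Lambda)$ for the separation measure of utterance $O_i$) follow that literature and are illustrative here rather than quoted from this paper:

$$
L_{\mathrm{SME}}(\Lambda) \;=\; \frac{\lambda}{\rho} \;+\; \frac{1}{N}\sum_{i=1}^{N} \bigl(\rho - d(O_i,\Lambda)\bigr)\, \mathbb{1}\bigl[d(O_i,\Lambda) \le \rho\bigr]
$$

Under this form, only utterances (or, at a finer granularity, frames) whose separation measure falls below the margin contribute to the loss; this selection mechanism, together with extended Baum-Welch optimization, is what the abstract credits for the gains over MCE.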

In IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU 2007), 2007

Publisher: IEEE
© 2008 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE. http://www.ieee.org/

Details

Type: Inproceedings
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4430122&isnumber=4430068
Pages: 268-271
Series: ASRU 2007