Soft Margin Estimation with Various Separation Levels for LVCSR

J. Li, Z. Yan, C.-H. Lee, and R.-H. Wang


We extend our previous work on soft margin estimation (SME) to large vocabulary continuous speech recognition (LVCSR) in two new aspects. The first is to formulate SME with different separation units: SME methods focusing on string-, word-, and phone-level separation are defined. The second is to compare SME with the popular conventional discriminative training (DT) methods, including maximum mutual information estimation (MMIE), minimum classification error (MCE), and minimum word/phone error (MWE/MPE). Tested on the 5k-word Wall Street Journal task, all the SME methods achieve relative word error rate (WER) reductions of 17% to 25% over our baseline. Among them, phone-level SME obtains the best performance: slightly better than MPE, and much better than the other conventional DT methods. This comprehensive comparison with conventional DT methods demonstrates the success of SME on LVCSR tasks.
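For readers unfamiliar with SME, the following is a rough sketch of the general form of the SME objective as presented in the authors' prior SME work; the symbols here are illustrative, and the string-, word-, and phone-level variants differ in how the separation measure is computed over the competing hypotheses:

```latex
% Illustrative sketch of the SME training objective (notation assumed):
% Lambda = HMM parameters, rho = soft margin, lambda = balance coefficient,
% O_i = i-th training utterance (N total), d(O_i, Lambda) = separation measure
% between the correct transcription and competing hypotheses at the chosen
% unit level (string, word, or phone).
\begin{equation}
  \min_{\Lambda,\,\rho}\;
  \frac{\lambda}{\rho}
  + \frac{1}{N} \sum_{i=1}^{N}
    \bigl(\rho - d(O_i, \Lambda)\bigr)\,
    \mathbb{I}\bigl(d(O_i, \Lambda) < \rho\bigr)
\end{equation}
```

The first term rewards a large margin $\rho$, while the hinge-style second term penalizes training samples whose separation falls inside the margin; the unit level at which $d(\cdot)$ is defined distinguishes the string-, word-, and phone-level SME variants compared in this paper.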


Publication type: Inproceedings
Published in: Proc. Interspeech