Soft Margin Estimation with Various Separation Levels for LVCSR

Jinyu Li, Zhi-Jie Yan, Chin-Hui Lee, and Ren-Hua Wang

Abstract

We extend our previous work on soft margin estimation (SME) to large vocabulary continuous speech recognition (LVCSR) in two new aspects. The first is to formulate SME with separation at different unit levels: SME methods targeting string-, word-, and phone-level separation are defined. The second is to compare SME with the popular conventional discriminative training (DT) methods, including maximum mutual information estimation (MMIE), minimum classification error (MCE), and minimum word/phone error (MWE/MPE). Tested on the 5k-word Wall Street Journal task, all the SME methods achieve relative word error rate (WER) reductions of 17% to 25% over our baseline. Among them, phone-level SME obtains the best performance, slightly better than MPE and much better than the other conventional DT methods. This comprehensive comparison with conventional DT methods demonstrates the success of SME on LVCSR tasks.
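As background for the abstract above, the SME objective from the authors' earlier work balances a margin term against an empirical hinge-style separation loss. The sketch below is a minimal illustration, not the paper's implementation: the function name, the simplified scalar separation scores, and the exact normalization are assumptions. The separation measure d(X_i) would in practice be a normalized log-likelihood difference between the correct transcription and its competitors, computed over strings, words, or phones depending on the chosen separation level.

```python
def sme_objective(separations, rho, lam):
    """Illustrative SME objective (assumed simplified form):
    lam / rho trades off against the average hinge loss of
    samples whose separation d falls short of the margin rho.

    separations: list of per-sample separation scores d(X_i)
                 (unit level -- string, word, or phone -- is
                 decided before these scores are computed)
    rho:         soft margin
    lam:         balance coefficient
    """
    margin_term = lam / rho
    # Only samples inside the margin (d < rho) incur a penalty.
    hinge = sum(max(0.0, rho - d) for d in separations)
    return margin_term + hinge / len(separations)


# Usage: two samples, one well separated (d=2.0 > rho),
# one on the decision boundary (d=0.0).
loss = sme_objective([2.0, 0.0], rho=1.0, lam=1.0)
print(loss)  # 1.0 (margin term) + 0.5 (average hinge) = 1.5
```

In this formulation, enlarging rho shrinks the margin term but pulls more samples inside the margin, so minimization seeks a balance between generalization (large margin) and empirical separation error.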

Details

Publication type: Inproceedings
Published in: 9th Annual Conference of the International Speech Communication Association, InterSpeech 2008
Pages: 269-273
Series: InterSpeech 2008
Publisher: International Speech Communication Association