Minimum Divergence Based Discriminative Training

Jun Du, Peng Liu, Frank Soong, Jian-Lai Zhou, and Ren-Hua Wang


We propose to use Minimum Divergence (MD) as a new measure of errors in discriminative training. To focus on improving discrimination between any two given acoustic models, we refine the error definition in terms of the Kullback-Leibler Divergence (KLD) between them. The new measure can be regarded as a modified version of Minimum Phone Error (MPE), but with a higher resolution than a purely symbol-matching criterion. Experimental recognition results show that the new MD-based training yields relative word error rate reductions of 57.8% and 6.1% on the TIDigits and Switchboard databases, respectively, compared with the ML-trained baseline systems. The recognition performance of MD is also shown to be consistently better than that of MPE.
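To give a flavor of the KLD measure underlying the approach: between full HMM/GMM acoustic models the KLD generally has no closed form and must be approximated, but for single Gaussian densities it is available in closed form. The sketch below (function name and setup are illustrative, not from the paper) computes the univariate Gaussian case:

```python
import math

def kl_gaussian(mu1, var1, mu2, var2):
    """Closed-form KL divergence KL(N(mu1, var1) || N(mu2, var2))
    between two univariate Gaussian densities."""
    return 0.5 * (math.log(var2 / var1)
                  + (var1 + (mu1 - mu2) ** 2) / var2
                  - 1.0)

# Identical densities have zero divergence.
print(kl_gaussian(0.0, 1.0, 0.0, 1.0))  # 0.0
# The divergence grows as the two means separate,
# i.e. as the models become easier to discriminate.
print(kl_gaussian(0.0, 1.0, 2.0, 1.0))  # 2.0
```

Intuitively, a larger KLD between two competing acoustic models means they are easier to tell apart, which is why it can serve as a finer-grained error measure than symbol matching.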


Publication type: Proceedings
Published in: Proc. of INTERSPEECH 2006
Publisher: International Speech Communication Association