Minimum Divergence Based Discriminative Training

  • Jun Du,
  • Peng Liu,
  • Frank Soong,
  • Jian-Lai Zhou,
  • Ren-Hua Wang

Proc. of INTERSPEECH 2006

Published by International Speech Communication Association

We propose to use Minimum Divergence (MD) as a new error measure in discriminative training. To focus on improving discrimination between any two given acoustic models, we refine the error definition in terms of the Kullback-Leibler Divergence (KLD) between them. The new measure can be regarded as a modified version of Minimum Phone Error (MPE), but with a higher resolution than a purely symbol-matching-based criterion. Experimental recognition results show that the new MD-based training yields relative word error rate reductions of 57.8% and 6.1% on the TIDigits and Switchboard databases, respectively, compared with the ML-trained baseline systems. The recognition performance of MD is also shown to be consistently better than that of MPE.
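As a rough illustration of the model-level divergence the abstract refers to, the sketch below computes the closed-form KL divergence between two diagonal-covariance Gaussians, the kind of state-level building block typically found in HMM acoustic models. The function name, shapes, and example values are assumptions for illustration only, not the paper's implementation.

```python
import numpy as np

def kl_divergence_diag_gaussians(mu_p, var_p, mu_q, var_q):
    """Closed-form KL(p || q) between two diagonal-covariance Gaussians.

    mu_p, var_p : mean and per-dimension variance of distribution p
    mu_q, var_q : mean and per-dimension variance of distribution q
    """
    mu_p, var_p = np.asarray(mu_p, dtype=float), np.asarray(var_p, dtype=float)
    mu_q, var_q = np.asarray(mu_q, dtype=float), np.asarray(var_q, dtype=float)
    return 0.5 * np.sum(
        np.log(var_q / var_p)             # log-determinant ratio
        + var_p / var_q                   # trace term
        + (mu_q - mu_p) ** 2 / var_q      # mean-difference (Mahalanobis) term
        - 1.0
    )

# Hypothetical example: divergence between two 3-dimensional Gaussian states
print(kl_divergence_diag_gaussians([0.0, 0.0, 0.0], [1.0, 1.0, 1.0],
                                   [0.5, -0.2, 0.1], [1.2, 0.8, 1.0]))
```

In an MD-style criterion, such per-state divergences would replace the coarse symbol-match accuracy of MPE with a continuous-valued measure of how acoustically different two competing models are; how the divergences are accumulated over hypotheses is specified in the paper itself.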