Discriminative Training of Context-Dependent Language Model Scaling Factors and Interpolation Weights

Proc. ASRU

Published by IEEE - Institute of Electrical and Electronics Engineers

We demonstrate how context-dependent language model scaling factors and interpolation weights can be unified in a single formulation whose free parameters are discriminatively trained using linear and non-linear optimization. The optimization objectives are defined over pairs of superior and inferior recognition hypotheses and correlate well with recognition error metrics. Experiments on a large, real-world application demonstrated that the solution significantly reduces recognition errors by leveraging the benefits of both context-dependent weighting and discriminative training.
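The pairwise objective described in the abstract can be illustrated with a minimal sketch: hypotheses are scored as a weighted combination of context-dependent language model features, and the weights are updated whenever an inferior hypothesis scores too close to its superior counterpart. The hinge margin, perceptron-style update, feature representation, and toy data below are illustrative assumptions, not the paper's exact formulation.

```python
def combined_score(weights, features):
    # Illustrative scoring: a weighted sum of per-hypothesis features
    # (e.g., context-dependent LM log-probabilities and acoustic score).
    return sum(w * f for w, f in zip(weights, features))

def pairwise_hinge_loss(weights, pairs, margin=1.0):
    # Each pair is (superior_features, inferior_features); the loss
    # penalizes pairs where the superior hypothesis fails to outscore
    # the inferior one by at least `margin`.
    return sum(
        max(0.0, margin - (combined_score(weights, sup)
                           - combined_score(weights, inf)))
        for sup, inf in pairs
    )

def train(pairs, dim, lr=0.1, epochs=100, margin=1.0):
    # Perceptron-style subgradient descent on the pairwise hinge loss;
    # weights start uniform, mimicking untuned scaling factors.
    w = [1.0] * dim
    for _ in range(epochs):
        for sup, inf in pairs:
            if combined_score(w, sup) - combined_score(w, inf) < margin:
                w = [wi + lr * (s - i) for wi, s, i in zip(w, sup, inf)]
    return w

# Toy pair: the superior hypothesis is favored by the first feature,
# the inferior one by the second; training should separate their scores.
pairs = [([2.0, 0.5], [1.0, 1.5])]
weights = train(pairs, dim=2)
```

After training, the superior hypothesis in each pair outscores the inferior one, which is the property the discriminatively trained scaling factors and interpolation weights are meant to deliver.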