Maximum Expected BLEU Training of Phrase and Lexicon Translation Models

Xiaodong He and Li Deng

Abstract

This paper proposes a new discriminative training method for constructing phrase and lexicon translation models. In order to reliably learn the large number of parameters in these models, we propose a smoothed, BLEU-score-based utility function with KL regularization as the objective, and train the models on a large parallel dataset. For training, we derive growth transformations for phrase and lexicon translation probabilities that iteratively improve the objective. The proposed method, evaluated on the Europarl German-to-English dataset, leads to a 1.1 BLEU point improvement over a state-of-the-art baseline translation system. In the IWSLT 2011 benchmark, our system using the proposed method achieves the best Chinese-to-English translation result on the task of translating TED talks.
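To illustrate the shape of the objective described above, here is a minimal, hedged sketch of a KL-regularized expected-BLEU utility computed over an n-best list of translation hypotheses for one source sentence. This is not the paper's implementation (which uses growth transformations over phrase/lexicon probabilities on full training data); the function name, the n-best representation, and the regularization weight `tau` are illustrative assumptions.

```python
import math

def expected_bleu_with_kl(nbest, base_probs, tau=0.5):
    """KL-regularized expected BLEU over an n-best list (illustrative sketch).

    nbest: list of (model_score, smoothed_sentence_bleu) pairs for one sentence.
    base_probs: baseline distribution q(e|f) over the same hypotheses,
                acting as the KL regularization anchor.
    tau: assumed regularization weight (hypothetical parameter name).

    Returns U = E_p[sBLEU] - tau * KL(p || q), where p is the softmax
    of the model scores over the n-best list.
    """
    # Normalize model scores into a distribution p (log-sum-exp for stability).
    m = max(score for score, _ in nbest)
    exps = [math.exp(score - m) for score, _ in nbest]
    z = sum(exps)
    p = [e / z for e in exps]

    # Expected smoothed sentence-level BLEU under p.
    expected = sum(pi * bleu for pi, (_, bleu) in zip(p, nbest))

    # KL(p || q) term that keeps p close to the baseline distribution q.
    kl = sum(pi * math.log(pi / qi)
             for pi, qi in zip(p, base_probs) if pi > 0)

    return expected - tau * kl
```

With two equally scored hypotheses and a uniform baseline, the KL term vanishes and the utility reduces to the average of the two smoothed BLEU scores. In training, the model parameters would be updated to increase this utility rather than computed once as here.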

Details

Publication type: Article
Published in: Proceedings of ACL
Publisher: Association for Computational Linguistics