Intelligent Selection of Language Model Training Data

We address the problem of selecting non-domain-specific language model training data to build auxiliary language models for use in tasks such as machine translation. Our approach is based on comparing the cross-entropy, according to domain-specific and non-domain-specific language models, for each sentence of the text source used to produce the latter language model. We show that this produces better language models, trained on less data, than either random data selection or a previous method based on measuring perplexity according to a domain-specific language model.
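The selection criterion described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes simple add-one-smoothed unigram language models (the paper uses stronger n-gram models), and the function names and threshold parameter are hypothetical. A sentence is kept when its cross-entropy under the in-domain model, minus its cross-entropy under the out-of-domain model, falls below a threshold.

```python
import math
from collections import Counter

def train_unigram(corpus):
    # Train a unigram LM with add-one smoothing over the corpus vocabulary.
    counts = Counter(tok for sent in corpus for tok in sent.split())
    total = sum(counts.values())
    vocab_size = len(counts)
    def logprob(tok):
        # +1 in the denominator reserves mass for unseen tokens.
        return math.log((counts[tok] + 1) / (total + vocab_size + 1))
    return logprob

def cross_entropy(logprob, sentence):
    # Per-token negative log-probability of the sentence under the LM.
    toks = sentence.split()
    return -sum(logprob(t) for t in toks) / len(toks)

def select(candidates, in_domain, out_domain, threshold=0.0):
    # Keep sentences scoring H_in(s) - H_out(s) < threshold:
    # lower scores mean the sentence looks more in-domain-like.
    lp_in = train_unigram(in_domain)
    lp_out = train_unigram(out_domain)
    return [s for s in candidates
            if cross_entropy(lp_in, s) - cross_entropy(lp_out, s) < threshold]
```

With a toy in-domain corpus about cats and an out-of-domain corpus of market news, `select` keeps the cat sentence and discards the market one, since only the former scores below zero.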

In: Proceedings of the ACL 2010 Conference Short Papers

Publisher: Association for Computational Linguistics

Address: Uppsala, Sweden