Intelligent Selection of Language Model Training Data

Robert C. Moore and William Lewis


We address the problem of selecting non-domain-specific language model training data to build auxiliary language models for use in tasks such as machine translation. Our approach is based on comparing the cross-entropy, according to domain-specific and non-domain-specific language models, for each sentence of the text source used to produce the latter language model. We show that this produces better language models, trained on less data, than either random data selection or a previous method based on measuring perplexity according to a domain-specific language model.
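The selection criterion described in the abstract can be illustrated with a small sketch: score each candidate sentence by the difference between its per-word cross-entropy under an in-domain language model and under a general-domain language model, then keep the lowest-scoring sentences. The code below is a toy illustration only, using add-one-smoothed unigram models and invented corpora; the function names and data are assumptions for this sketch, not the authors' implementation, which uses full n-gram language models.

```python
import math
from collections import Counter

def train_unigram(corpus, vocab):
    """Train an add-one-smoothed unigram LM over a fixed shared vocabulary."""
    counts = Counter(w for sent in corpus for w in sent.split())
    total = sum(counts.values())
    return {w: (counts[w] + 1) / (total + len(vocab)) for w in vocab}

def cross_entropy(sentence, lm):
    """Per-word cross-entropy (bits per word) of a sentence under a unigram LM."""
    words = sentence.split()
    return -sum(math.log2(lm[w]) for w in words) / len(words)

def cross_entropy_difference(sentence, lm_in, lm_out):
    """Lower scores mark sentences that resemble the in-domain data
    while being atypical of the general data."""
    return cross_entropy(sentence, lm_in) - cross_entropy(sentence, lm_out)

# Toy corpora (hypothetical): in-domain = parliamentary text,
# general pool = mixed text from which training data is to be selected.
in_domain = ["the honourable member rises", "the house adjourns today"]
general = ["buy cheap tickets now", "the weather is nice",
           "the house adjourns today"]
vocab = set(w for s in in_domain + general for w in s.split())

lm_in = train_unigram(in_domain, vocab)
lm_out = train_unigram(general, vocab)

# Rank the general pool; the lowest-scoring sentences would be selected.
ranked = sorted(general, key=lambda s: cross_entropy_difference(s, lm_in, lm_out))
```

In practice a score threshold is chosen (e.g. by tuning perplexity on held-out in-domain data), and only sentences below it are kept for training the auxiliary language model.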


Publication type: Inproceedings
Published in: Proceedings of the ACL 2010 Conference Short Papers
Address: Uppsala, Sweden
Publisher: Association for Computational Linguistics