Efficient Subsampling for Training Complex Language Models

We propose an efficient way to train maximum entropy language models (MELMs) and neural network language models (NNLMs). The advantage of the proposed method comes from a more robust and efficient subsampling technique. The original multi-class language modeling problem is transformed into a set of binary problems, where each binary classifier predicts whether or not a particular word will occur. We show that the binarized model is as powerful as the standard model and allows us to aggressively subsample negative training examples without sacrificing predictive performance. Empirical results show that we can train MELMs and NNLMs at 1%–5% of the standard complexity with no loss in performance.
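The following is a minimal, illustrative sketch of the binarization and negative-subsampling idea described in the abstract. It is not the authors' implementation; the function name, data layout, and the subsampling rate are assumptions chosen only to make the idea concrete. Each (context, next word) event produces one positive example for the true word's binary classifier, while negatives for all other vocabulary words are kept only with a small probability.

```python
import random
from collections import defaultdict

def make_binary_examples(corpus, vocab, neg_keep_prob=0.05, seed=0):
    """Turn multi-class next-word prediction data into per-word binary examples.

    Each (context, next_word) pair yields one positive example for the classifier
    of `next_word`; every other vocabulary word would yield a negative example,
    but negatives are kept only with probability `neg_keep_prob` (aggressive
    subsampling, e.g. the 1%-5% range mentioned in the abstract).
    """
    rng = random.Random(seed)
    examples = defaultdict(list)  # word -> list of (context, label) pairs
    for context, next_word in corpus:
        examples[next_word].append((context, 1))           # positive example
        for w in vocab:
            if w != next_word and rng.random() < neg_keep_prob:
                examples[w].append((context, 0))           # subsampled negative
    return examples

# Toy usage: bigram-style contexts over a tiny vocabulary (hypothetical data).
vocab = ["the", "cat", "sat", "mat"]
corpus = [(("the",), "cat"), (("cat",), "sat"), (("the",), "mat")]
binary_data = make_binary_examples(corpus, vocab, neg_keep_prob=0.05)
for word, data in binary_data.items():
    print(word, data)
```

Because each binary classifier only needs its own positives plus a small random fraction of negatives, the total number of training examples shrinks roughly in proportion to the subsampling rate, which is where the claimed training-cost reduction comes from.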

PDF  binarylm_final.pdf

In  Empirical Methods in Natural Language Processing

Publisher  Association for Computational Linguistics

Details

Type  Inproceedings