Efficient Subsampling for Training Complex Language Models

  • Puyang Xu,
  • Asela Gunawardana,
  • Sanjeev Khudanpur

Empirical Methods in Natural Language Processing (EMNLP)

Published by the Association for Computational Linguistics

We propose an efficient way to train maximum entropy language models (MELM) and neural network language models (NNLM). The advantage of the proposed method comes from a more robust and efficient subsampling technique: the original multi-class language modeling problem is transformed into a set of binary problems, where each binary classifier predicts whether or not a particular word will occur. We show that the binarized model is as powerful as the standard model and that it allows us to aggressively subsample negative training examples without sacrificing predictive performance. Empirical results show that we can train MELM and NNLM at 1%–5% of the standard complexity with no loss in performance.
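To make the binarization-plus-subsampling idea concrete, here is a minimal sketch of how binary training examples might be generated from a corpus. It assumes a hypothetical helper `make_binary_examples` and a subsampling rate `neg_rate`; the negatives that are kept are reweighted by `1/neg_rate` so that the subsampled objective remains an unbiased estimate of the full one. This is an illustrative sketch of the general technique, not the authors' exact training procedure.

```python
import random

def make_binary_examples(corpus, vocab, neg_rate=0.02, seed=0):
    """Turn each (history, next-word) event into per-word binary examples.

    For every position in the corpus, the observed next word yields one
    positive example.  Every other vocabulary word is a potential negative
    example, but it is kept only with probability `neg_rate`; its weight is
    scaled by 1 / neg_rate so the subsampled loss is unbiased on average.
    """
    rng = random.Random(seed)
    examples = []  # tuples of (history, word, label, weight)
    for sentence in corpus:
        for i in range(1, len(sentence)):
            history, target = tuple(sentence[:i]), sentence[i]
            # Positive example: the word that actually occurred.
            examples.append((history, target, 1, 1.0))
            # Negative examples: all other words, aggressively subsampled.
            for w in vocab:
                if w == target:
                    continue
                if rng.random() < neg_rate:
                    examples.append((history, w, 0, 1.0 / neg_rate))
    return examples

if __name__ == "__main__":
    corpus = [["<s>", "the", "cat", "sat"], ["<s>", "the", "dog", "ran"]]
    vocab = {"the", "cat", "dog", "sat", "ran"}
    # With neg_rate=0.5 roughly half of the possible negatives are kept,
    # each carrying weight 2.0 to compensate.
    for ex in make_binary_examples(corpus, vocab, neg_rate=0.5)[:10]:
        print(ex)
```

Each kept example can then be fed to an independent binary classifier (e.g., a per-word logistic regression or a binarized NNLM output), with the weights applied to the per-example loss. Because negatives vastly outnumber positives in language modeling, even a small `neg_rate` leaves the positives intact while shrinking the training set dramatically, which is where the 1%–5% complexity figure comes from.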