Puyang Xu, Asela Gunawardana, and Sanjeev Khudanpur
We propose an efficient way to train maximum entropy language models (MELM) and neural network language models (NNLM). The advantage of the proposed method comes from a more robust and efficient subsampling technique. The original multi-class language modeling problem is transformed into a set of binary problems where each binary classifier predicts whether or not a particular word will occur. We show that the binarized model is as powerful as the standard model and allows us to aggressively subsample negative training examples without sacrificing predictive performance. Empirical results show that we can train MELM and NNLM at 1%–5% of the standard complexity with no loss in performance.
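The abstract only outlines the method, so the following is a minimal sketch of the binarization-plus-negative-subsampling idea it describes: each vocabulary word gets its own binary classifier, the observed word supplies a positive example, and only a small random subset of the remaining words is updated as negatives. The feature representation, the uniform negative-sampling distribution, the (V - 1)/K importance weight, and all names (`sgd_step`, `K`, `LR`, etc.) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

V = 1000   # toy vocabulary size
D = 50     # toy context-feature dimension
K = 20     # negatives sampled per position (~2% of V)
LR = 0.1   # SGD step size

# One weight vector per word: row w is an independent binary
# classifier predicting "does word w occur in this context?"
W = np.zeros((V, D))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgd_step(context_feats, target):
    """One stochastic update for a single (context, target) position.

    The multi-class problem is binarized: the target word is a positive
    example for its own classifier, and K randomly sampled non-target
    words serve as negatives. Each negative gradient is up-weighted by
    (V - 1) / K so the subsampled update is an unbiased estimate of the
    gradient over all V - 1 negatives (an assumed correction scheme).
    """
    # Positive update: push sigmoid(W[target] . f) toward 1.
    p = sigmoid(W[target] @ context_feats)
    W[target] += LR * (1.0 - p) * context_feats

    # Negative updates on K uniformly sampled words, skipping the
    # target, importance-weighted to correct for the subsampling.
    negs = rng.choice(V, size=K)
    negs = negs[negs != target]
    weight = (V - 1) / K
    for w in negs:
        q = sigmoid(W[w] @ context_feats)
        W[w] -= LR * weight * q * context_feats

# Toy usage: random context features, random target words.
for _ in range(100):
    f = rng.normal(size=D)
    sgd_step(f, target=int(rng.integers(V)))
```

Per position this touches K + 1 classifiers instead of all V, which is the source of the complexity reduction the abstract reports; any real implementation would use the paper's actual features and sampling correction.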
Published in: Empirical Methods in Natural Language Processing
Publisher: Association for Computational Linguistics