SPEED REGULARIZATION AND OPTIMALITY IN WORD CLASSING

Geoffrey Zweig and Konstantin Makarychev

Abstract

Word-classing has been used in language modeling for two distinct purposes: to improve the likelihood of the language model, and to improve its runtime speed. In particular, frequency-based heuristics have been proposed to improve the speed of recurrent neural network language models (RNNLMs). In this paper, we present a dynamic programming algorithm for determining classes in a way that provably minimizes the runtime of the resulting class-based language models. However, we also find that the speed-based methods degrade the perplexity of the language models by 5-10% relative to traditional likelihood-based classing. We remedy this via the introduction of a speed-based regularization term in the likelihood objective function. This achieves a runtime close to that of the speed-based methods without loss in perplexity. We demonstrate these improvements with both an RNNLM and the Model M exponential language model, on three different tasks involving two different languages.
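To make the runtime objective concrete: in a class-based softmax, predicting a word costs one pass over the class layer plus one pass over the words in the predicted word's class, so the expected per-token cost depends on how probability mass is distributed across classes. The sketch below, a hypothetical illustration rather than the paper's dynamic-programming algorithm, implements a simple frequency-based binning heuristic of the kind the abstract refers to, together with the expected-runtime quantity that the paper's algorithm provably minimizes. All function names and details are illustrative assumptions.

```python
from collections import Counter

def frequency_binning(word_counts, num_classes):
    """Assign words to classes by cumulative unigram frequency, a common
    speed-oriented heuristic (illustrative; NOT the paper's provably
    optimal dynamic-programming method). Frequent words end up in small
    classes, so the inner softmax over their class is cheap."""
    total = sum(word_counts.values())
    words = sorted(word_counts, key=word_counts.get, reverse=True)
    classes, cum, idx = {}, 0.0, 0
    for w in words:
        classes[w] = min(idx, num_classes - 1)
        cum += word_counts[w] / total
        # Advance to the next class once this one holds 1/num_classes
        # of the total probability mass.
        if cum >= (idx + 1) / num_classes:
            idx += 1
    return classes

def expected_runtime(word_counts, classes, num_classes):
    """Expected per-token softmax cost: num_classes units for the class
    layer, plus the size of the class containing the sampled word,
    weighted by unigram probability."""
    total = sum(word_counts.values())
    class_sizes = Counter(classes.values())
    return num_classes + sum(
        (word_counts[w] / total) * class_sizes[classes[w]]
        for w in word_counts
    )
```

For a uniform 4-word vocabulary split into 2 classes of 2 words each, the expected cost is 2 (class layer) + 2 (within-class layer) = 4, versus 1 + 4 = 5 for a single flat class; the gap grows rapidly with vocabulary size.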

Details

Publication type: Inproceedings
Published in: ICASSP
Publisher: IEEE