Context Dependent Recurrent Neural Network Language Model

Tomas Mikolov and Geoffrey Zweig

Abstract

Recurrent neural network language models (RNNLMs) have recently demonstrated state-of-the-art performance across a variety of tasks. In this paper, we improve their performance by providing a contextual real-valued input vector in association with each word. This vector is used to convey contextual information about the sentence being modeled. By performing Latent Dirichlet Allocation using a block of preceding text, we achieve a topic-conditioned RNNLM. This approach has the key advantage of avoiding the data fragmentation associated with building multiple topic models on different data subsets. We report perplexity results on the Penn Treebank data, where we achieve a new state-of-the-art. We further apply the model to the Wall Street Journal speech recognition task, where we observe improvements in word-error-rate.
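The abstract describes feeding an extra real-valued context vector (e.g., an LDA topic distribution over the preceding text) into the recurrent language model at every word. The sketch below is a minimal, illustrative forward step of such a context-conditioned RNN LM; it is not the authors' released code, and the matrix names (U, W, V, F, G), layer sizes, and the Dirichlet stand-in for the LDA topic mixture are assumptions made for this example.

```python
import numpy as np

# Illustrative sketch (not the authors' implementation): one forward step of a
# simple RNN language model that receives an additional real-valued context
# vector f_t, such as an LDA topic distribution over the preceding text,
# alongside the current word. All names and sizes below are assumptions.

rng = np.random.default_rng(0)

vocab_size   = 10000   # vocabulary size
hidden_size  = 200     # recurrent state dimension
context_size = 40      # e.g., number of LDA topics

# Model parameters (randomly initialized here; learned during training).
U = rng.normal(scale=0.1, size=(hidden_size, vocab_size))    # word   -> hidden
W = rng.normal(scale=0.1, size=(hidden_size, hidden_size))   # hidden -> hidden
F = rng.normal(scale=0.1, size=(hidden_size, context_size))  # context -> hidden
V = rng.normal(scale=0.1, size=(vocab_size, hidden_size))    # hidden -> output
G = rng.normal(scale=0.1, size=(vocab_size, context_size))   # context -> output

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def rnn_step(word_id, s_prev, f_t):
    """One step: current word id, previous hidden state, context vector f_t."""
    w_t = np.zeros(vocab_size)
    w_t[word_id] = 1.0                        # 1-of-N encoding of the word
    s_t = sigmoid(U @ w_t + W @ s_prev + F @ f_t)
    y_t = softmax(V @ s_t + G @ f_t)          # distribution over the next word
    return s_t, y_t

# Usage: a hypothetical topic vector standing in for LDA over preceding text.
s = np.zeros(hidden_size)
f = rng.dirichlet(np.ones(context_size))      # stand-in for an LDA topic mixture
s, y = rnn_step(word_id=42, s_prev=s, f_t=f)
print(y.shape, y.sum())                       # (10000,) and ~1.0
```

Conditioning both the hidden layer and the output layer on the same context vector is one straightforward way to realize the "contextual real-valued input" described in the abstract; the key point is that a single model is shared across topics, avoiding the data fragmentation of training separate topic-specific models.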

Details

Publication type: Inproceedings
Published in: Spoken Language Technologies
Publisher: IEEE

Previous versions

Tomas Mikolov and Geoffrey Zweig. Context Dependent Recurrent Neural Network Language Model, July 2012.
