Randomized Maximum Entropy Language Models

Puyang Xu, Sanjeev Khudanpur, and Asela Gunawardana


We address the memory problem of maximum entropy language models (MELMs) with very large feature sets. Randomized techniques are employed to remove all large, exact data structures from MELM implementations. To avoid the dictionary structure that maps each feature to its corresponding weight, the feature hashing trick can be used. We also replace the explicit storage of the feature set with a Bloom filter. We show with extensive experiments that the false positive errors of Bloom filters and random hash collisions do not degrade model performance. Both perplexity and word error rate (WER) improvements are demonstrated by building MELMs that would otherwise be prohibitively large to estimate or store.
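The two randomized structures named in the abstract can be sketched as follows. This is an illustrative Python sketch, not the paper's implementation: the class names, bucket sizes, and use of MD5 as the hash function are all assumptions made here for clarity. The hashing trick maps each feature string directly to a slot in a fixed-size weight array (colliding features share a slot), and the Bloom filter answers approximate membership queries over the observed feature set, with possible false positives but no false negatives.

```python
import hashlib


class HashedWeights:
    """Feature weights stored via the hashing trick (illustrative sketch).

    No feature-to-index dictionary is kept; a hash of the feature string
    picks the weight slot, and colliding features simply share it.
    """

    def __init__(self, num_buckets=2 ** 20):
        self.num_buckets = num_buckets
        self.weights = [0.0] * num_buckets

    def _index(self, feature):
        # Hash the feature string to a bucket index (MD5 is an arbitrary choice).
        digest = hashlib.md5(feature.encode("utf-8")).digest()
        return int.from_bytes(digest[:8], "big") % self.num_buckets

    def get(self, feature):
        return self.weights[self._index(feature)]

    def update(self, feature, delta):
        self.weights[self._index(feature)] += delta


class BloomFilter:
    """Approximate set of observed features (illustrative sketch).

    May report a feature as present when it is not (false positive),
    but never misses a feature that was actually added.
    """

    def __init__(self, num_bits=2 ** 22, num_hashes=4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = [False] * num_bits

    def _indices(self, feature):
        # Derive several bit positions by salting the hash with a counter.
        for i in range(self.num_hashes):
            digest = hashlib.md5(f"{i}:{feature}".encode("utf-8")).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, feature):
        for idx in self._indices(feature):
            self.bits[idx] = True

    def might_contain(self, feature):
        return all(self.bits[idx] for idx in self._indices(feature))
```

Together these remove the exact dictionary and the explicit feature list: `HashedWeights` holds the model parameters in constant memory, and `BloomFilter` decides (approximately) whether a candidate feature was seen in training.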


Publication type: Inproceedings
Published in: Automatic Speech Recognition and Understanding
Publisher: IEEE SPS