Andreas Stolcke, Jing Zheng, Wen Wang, and Victor Abrash
We review developments in the SRI Language Modeling Toolkit (SRILM) since 2002, when a previous paper on SRILM was published. These developments include measures to make training from large data sets more efficient, implementations of additional language modeling techniques (such as for adaptation and smoothing), and support for client/server operation. In addition, the functionality for lattice processing has been greatly expanded. We also highlight several external contributions and notable applications of the toolkit, and assess SRILM’s impact on the research community.
Published in: Proc. IEEE Automatic Speech Recognition and Understanding Workshop