Selective Use Of Multiple Entropy Models In Audio Coding

Multiple entropy models for Huffman or arithmetic coding are widely used to improve compression efficiency when the source probability distribution varies. However, using multiple entropy models significantly increases the memory requirements of both the encoder and decoder. In this paper, we present an algorithm that maintains almost all of the compression gains of multiple entropy models for only a very small increase in memory over a coder that uses a single entropy model. The technique applies to any entropy coding scheme, such as Huffman or arithmetic coding. It is accomplished by employing multiple entropy models only for the most probable symbols and fewer entropy models for the less probable symbols. We show that this algorithm reduces the audio coding bitrate by 5%–8% relative to an existing algorithm that uses the same amount of table memory, by allowing effective switching of the entropy model as source statistics change over an audio transform block.
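The memory/bitrate trade-off described above can be illustrated with a small expected-bit-cost calculation. In the sketch below, the symbol probabilities, the `tonal`/`noisy` contexts, the top-K split, and the shared tail model are all illustrative assumptions, not tables from the paper; it compares the ideal per-context cost, a single shared model, and a selective model that keeps per-context statistics only for the K most probable symbols and codes the rest through an escape symbol followed by a shared tail model:

```python
import math

def xent(true_p, model_p):
    """Expected bits/symbol when symbols drawn from true_p are coded
    with a model assigning probabilities model_p."""
    return sum(p * -math.log2(model_p[s]) for s, p in true_p.items() if p > 0)

# Two hypothetical source contexts (made-up numbers for illustration).
contexts = {
    "tonal": {0: 0.50, 1: 0.25, 2: 0.12, 3: 0.08, 4: 0.03, 5: 0.02},
    "noisy": {0: 0.20, 1: 0.20, 2: 0.20, 3: 0.15, 4: 0.15, 5: 0.10},
}

K = 3              # per-context models cover only the K most probable symbols
TOP = {0, 1, 2}    # in this toy example the top-K set coincides in both contexts
TAIL = {3, 4, 5}

# Single shared model: plain average of the context distributions.
shared = {s: sum(c[s] for c in contexts.values()) / len(contexts)
          for s in TOP | TAIL}

# Shared tail model: the average distribution renormalized over tail symbols.
tail_norm = sum(shared[s] for s in TAIL)
tail_model = {s: shared[s] / tail_norm for s in TAIL}

results = {}
for name, p in contexts.items():
    full = xent(p, p)          # one full model per context (the entropy bound)
    single = xent(p, shared)   # one shared model for everything
    # Selective: top-K symbols (plus an escape) use the context model;
    # escaped symbols are then coded with the shared tail model.
    p_esc = sum(p[s] for s in TAIL)
    selective = (sum(p[s] * -math.log2(p[s]) for s in TOP)
                 + p_esc * -math.log2(p_esc)
                 + sum(p[s] * -math.log2(tail_model[s]) for s in TAIL))
    results[name] = (full, selective, single)
    print(f"{name}: full={full:.3f}  selective={selective:.3f}  single={single:.3f}")
```

With these numbers the selective coder lands within a few hundredths of a bit of the full multi-model cost in both contexts, while its per-context tables cover only K symbols plus an escape; the single shared model is noticeably worse whenever the context distributions differ.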


In Proc. IEEE Workshop on Multimedia Signal Processing, 2008.
Publisher: IEEE
© 2008 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

