Selective Use Of Multiple Entropy Models In Audio Coding

Sanjeev Mehrotra and Wei-ge Chen

Abstract

Multiple entropy models for Huffman or arithmetic coding are widely used to improve the compression efficiency of many algorithms when the source probability distribution varies. However, using multiple entropy models significantly increases the memory requirements of both the encoder and decoder. In this paper, we present an algorithm which maintains almost all of the compression gains of multiple entropy models for only a very small increase in memory over an approach that uses a single entropy model. It can be used with any entropy coding scheme, such as Huffman or arithmetic coding. This is accomplished by employing multiple entropy models only for the most probable symbols and fewer entropy models for the less probable symbols. We show that, by allowing effective switching of the entropy model as source statistics change over an audio transform block, this algorithm reduces the audio coding bitrate by 5%-8% relative to an existing algorithm which uses the same amount of table memory.
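
The sketch below illustrates the general idea described in the abstract, not the authors' actual implementation: each of several entropy models keeps its own code table only for the K most probable symbols plus an escape entry, while less probable symbols fall back to a single shared table used by all models. The names (K, per_model_tables, shared_table, code_length) and all table sizes and code lengths are illustrative assumptions.

```python
# Sketch: selective use of multiple entropy models.
# Per-model tables cover only the K most probable symbols plus an escape
# marker; rarer symbols are coded through one shared table, keeping the
# extra memory per model small. Code lengths (in bits) stand in for
# actual Huffman or arithmetic codes; all values are illustrative.

K = 4  # number of most-probable symbols given per-model codes (assumed)

per_model_tables = [
    {0: 1, 1: 2, 2: 3, 3: 4, "ESC": 4},  # model tuned for low-valued symbols
    {0: 4, 1: 3, 2: 2, 3: 1, "ESC": 4},  # model tuned for a different distribution
]

# One shared table covers the full (larger) alphabet for escaped symbols.
shared_table = {s: 6 for s in range(64)}

def code_length(symbol, model_id):
    """Bits needed to code `symbol` when the encoder has selected `model_id`."""
    table = per_model_tables[model_id]
    if symbol in table:
        return table[symbol]                      # frequent symbol: model-specific code
    return table["ESC"] + shared_table[symbol]    # rare symbol: escape + shared code

if __name__ == "__main__":
    symbols = [0, 1, 0, 37, 2, 0, 50]
    # An encoder could pick, per block, whichever model codes the block cheapest,
    # switching models as the source statistics change.
    for m in range(len(per_model_tables)):
        bits = sum(code_length(s, m) for s in symbols)
        print(f"model {m}: {bits} bits")
```

Because only the small per-symbol tables are replicated across models while the large tail of the alphabet shares one table, the memory cost grows only slightly with the number of models, which is the trade-off the paper exploits.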

Details

Publication type: Inproceedings
Published in: Proc. Workshop on Multimedia Signal Processing
Publisher: IEEE