Do optimal entropy-constrained quantizers have a finite or infinite number of codewords?

An entropy-constrained quantizer is optimal if it minimizes

the expected distortion subject to a constraint on the output entropy. In this correspondence, we use the Lagrangian formulation to show

the existence and study the structure of optimal entropy-constrained quantizers

that achieve a point on the lower convex hull of the operational distortion-

rate function. In general,

an optimal entropy-constrained quantizer may have a countably infinite

number of codewords. Our main results show that if the tail of the source

distribution is sufficiently light (resp., heavy) with respect to the distortion

measure, the Lagrangian-optimal entropy-constrained quantizer has a finite

(resp., infinite) number of codewords. In particular, for the squared

error distortion measure, if the tail of the source distribution is lighter than

the tail of a Gaussian distribution, then the Lagrangian-optimal quantizer

has only a finite number of codewords, while if the tail is heavier than that

of the Gaussian, the Lagrangian-optimal quantizer has an infinite number

of codewords.
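The Lagrangian formulation mentioned above replaces the constrained problem (minimize expected distortion subject to an entropy budget) with the unconstrained one of minimizing D + λH for a multiplier λ ≥ 0, which traces out the lower convex hull of the operational distortion-rate function. As an informal illustration only (not the paper's construction), here is a minimal Lloyd-style alternating-minimization sketch for a scalar source under squared error; the sample distribution, initial codebook size, and λ value are arbitrary choices for demonstration:

```python
import numpy as np

def ecq_lagrangian(samples, codebook, lam, iters=50):
    """Illustrative entropy-constrained quantizer design:
    alternately minimize E[(X - Q(X))^2] + lam * H(Q(X)) over
    the partition and the codewords, on a set of samples."""
    codebook = np.array(codebook, dtype=float)
    k = len(codebook)
    probs = np.full(k, 1.0 / k)  # start from uniform cell probabilities
    for _ in range(iters):
        # Ideal codeword lengths -log2 p_i from current probabilities;
        # empty cells get a huge length and are effectively pruned.
        lengths = -np.log2(np.maximum(probs, 1e-12))
        # Nearest-neighbor step with the Lagrangian cost:
        # assign each sample to argmin_i (x - c_i)^2 + lam * lengths[i]
        costs = (samples[:, None] - codebook[None, :]) ** 2 \
                + lam * lengths[None, :]
        assign = np.argmin(costs, axis=1)
        # Centroid / probability update for each cell
        for i in range(k):
            mask = assign == i
            probs[i] = mask.mean()
            if mask.any():
                codebook[i] = samples[mask].mean()
    return codebook, probs

# Demo on a bounded (very light-tailed) source
rng = np.random.default_rng(0)
samples = rng.uniform(-1.0, 1.0, 2000)
codebook, probs = ecq_lagrangian(samples, np.linspace(-1, 1, 8), lam=0.2)
```

Cells whose probability falls to zero receive an effectively infinite code length and are never selected again, so a larger λ prunes the codebook. This gives an informal picture of the paper's theme: whether the optimal codebook stays finite depends on how the source tail trades off against the distortion measure.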


In: IEEE Transactions on Information Theory

Publisher: Institute of Electrical and Electronics Engineers, Inc.
© 2003 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

Details

Type: Article