This is a consolidated post containing three former posts on the entropy of uniformly quantized random variables.
This post is about entropy of discrete stochastic variables that are derived by quantizing continuous stochastic variables. A good introduction to the concept of entropy in Information Theory is in [5].
Let $X$ be a continuous stochastic variable with distribution $\mathrm{Exp}(\lambda)$ and let $X_\Delta = k\Delta$, such that $k\Delta \le X < (k+1)\Delta$ with $k \in \{0, 1, 2, \ldots\}$, be its uniformly quantized version with step size $\Delta$. We would like to find the entropy of the discrete stochastic variable $X_\Delta$, denoted by $H(X_\Delta)$, which provides us an estimate of the average number of bits required to encode $X_\Delta$.
For the continuous stochastic variable $X$, entropy is not generally defined. However, differential entropy, denoted by $h(X)$, plays the same role as entropy for discrete stochastic variables. A well known approximation relating the differential entropy of a variable and the entropy of its quantized version is given as follows.
$$H(X_\Delta) \approx h(X) - \log_2 \Delta \qquad (2.1)$$
Unfortunately, this approximation is only valid for small values of $\Delta$ and breaks down as $\Delta$ increases. For example, for the common case of $\Delta = 1$, the approximation predicts the differential entropy $h(X)$ and the entropy $H(X_\Delta)$ of the quantized variable to be equal. This is generally not the case.
A common workaround is to use numerical simulations to calculate the entropy $H(X_\Delta)$. However, it would be satisfying to have an analytical expression, notwithstanding its practical use. As we shall see below, the expression turns out to be quite cumbersome.
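As an aside, the simulation route is easy to sketch. Below is a minimal Python/NumPy sketch, assuming $X \sim \mathrm{Exp}(\lambda)$ and the floor quantizer defined above; the function name, sample size and parameter values are illustrative choices and not part of the original post. It compares a plug-in Monte Carlo estimate of $H(X_\Delta)$ with the approximation (2.1).
\begin{verbatim}
import numpy as np

def quantized_entropy_mc(lam, delta, n=1_000_000, seed=0):
    """Plug-in Monte Carlo estimate of H(X_Delta) for X ~ Exp(lam)."""
    rng = np.random.default_rng(seed)
    x = rng.exponential(scale=1.0 / lam, size=n)
    k = np.floor(x / delta).astype(np.int64)   # quantization indices, X_Delta = k * delta
    counts = np.bincount(k)
    p = counts[counts > 0] / n                 # empirical pmf
    return -np.sum(p * np.log2(p))             # entropy estimate in bits

lam, delta = 1.0, 1.0                          # illustrative values
print("Monte Carlo H(X_Delta):     ", quantized_entropy_mc(lam, delta))
print("Approximation h(X) - log2 D:", np.log2(np.e / lam) - np.log2(delta))
\end{verbatim}
Even at $\Delta = 1$ the two numbers do not coincide, which is the point made above.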
Let $f_X$ and $p_{X_\Delta}$ denote the probability density and mass functions of $X$ and $X_\Delta$ respectively. They are related as
$$p_{X_\Delta}(k\Delta) = \int_{k\Delta}^{(k+1)\Delta} f_X(x)\,dx = \Big[ F_X\big((k+1)\Delta\big) - F_X(k\Delta) \Big]\,\mathbb{1}\{k \geq 0\}, \qquad (2.2)$$
where $F_X$ denotes the cumulative distribution function of $X$. The indicator function $\mathbb{1}\{k \geq 0\}$ is $1$ when $k \geq 0$ and $0$ otherwise, and serves to set the lower bound correctly.
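For concreteness, the bin probabilities in equation (2.2) can be tabulated directly from the Exponential distribution function. A small Python/NumPy sketch (the truncation point and the parameter values are arbitrary illustrative choices) that also checks that the masses sum to one, up to the truncated tail:
\begin{verbatim}
import numpy as np

lam, delta, k_max = 1.0, 1.0, 200            # illustrative choices
F = lambda x: 1.0 - np.exp(-lam * x)         # Exponential CDF
k = np.arange(k_max)
pmf = F((k + 1) * delta) - F(k * delta)      # eq. (2.2): mass of bin [k*delta, (k+1)*delta)
print("total probability:", pmf.sum())       # close to 1 (tail beyond k_max is dropped)
print("first few masses: ", pmf[:4])
\end{verbatim}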
Now onto calculating the negative entropy $-H(X_\Delta)$, which is given (by definition) as follows. The motivation behind using the negative entropy instead of the entropy is to avoid (many) negative signs in the right hand side expressions.
$$-H(X_\Delta) = \sum_{k=0}^{\infty} p_{X_\Delta}(k\Delta)\,\log_2 p_{X_\Delta}(k\Delta) \qquad (2.3)$$
$$= \sum_{k=0}^{\infty} \Big[ F_X\big((k+1)\Delta\big) - F_X(k\Delta) \Big]\,\log_2 \Big[ F_X\big((k+1)\Delta\big) - F_X(k\Delta) \Big] \qquad (2.4)$$
Using the definition of the continuous distribution function for the Exponential distribution, $F_X(x) = 1 - e^{-\lambda x}$ for $x \geq 0$, the above equation is simplified as follows.
$$-H(X_\Delta) = \big(1 - e^{-\lambda\Delta}\big) \Big[ \log_2\!\big(1 - e^{-\lambda\Delta}\big)\, S_0 \;-\; \lambda\Delta \log_2(e)\, S_1 \Big] \qquad (2.5)$$
With the functions $S_0$ and $S_1$ given by:
$$S_0 = \sum_{k=0}^{\infty} e^{-\lambda k \Delta} \qquad (2.6)$$
$$S_1 = \sum_{k=0}^{\infty} k\, e^{-\lambda k \Delta} \qquad (2.7)$$
The exact values of $S_0$ and $S_1$, as we shall see below, are cumbersome. Therefore, we first obtain an approximate result by plugging in the approximations for $S_0$ and $S_1$ given below, obtained by replacing the sums with integrals (reasonable for small $\lambda\Delta$), into equation (2.5).
$$S_0 \approx \int_0^\infty e^{-\lambda\Delta t}\, dt = \frac{1}{\lambda\Delta} \qquad (2.8)$$
$$S_1 \approx \int_0^\infty t\, e^{-\lambda\Delta t}\, dt = \frac{1}{(\lambda\Delta)^2} \qquad (2.9)$$
The performance of the above approximation may be verified numerically. In the graph below, we hold the quantization step size constant at $\Delta = 1$ and vary the parameter $\lambda$. We observe that the approximation is acceptable, i.e. the difference is less than 1 bit, only over a limited range of $\lambda$. For higher values of $\lambda$, the approximation fails.
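In the same spirit as the plot, the comparison can be tabulated with a few lines of Python/NumPy; the grid of $\lambda$ values and the truncation point are illustrative choices. The true entropy is obtained here by direct summation of the pmf rather than from the closed form derived further below.
\begin{verbatim}
import numpy as np

delta, k_max = 1.0, 4000                            # step size fixed at 1, as in the plot
for lam in [0.1, 0.5, 1.0, 2.0, 5.0]:               # illustrative grid
    rho = np.exp(-lam * delta)
    p = (1 - rho) * rho ** np.arange(k_max)         # pmf of the quantized Exponential
    p = p[p > 0]                                    # guard against underflow in the far tail
    H_true = -np.sum(p * np.log2(p))                # entropy by direct summation
    H_approx = np.log2(np.e / lam) - np.log2(delta) # approximation (2.1)
    print(f"lambda={lam:4.1f}  H={H_true:7.4f}  approx={H_approx:7.4f}")
\end{verbatim}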
The exact result, shown in the graph above, may be obtained by plugging in the exact values of $S_0$ and $S_1$ into equation (2.5). The exact values may be computed starting from the following identities, valid for $|x| < 1$.
$$\sum_{k=0}^{\infty} x^k = \frac{1}{1-x} \qquad (2.10)$$
$$\sum_{k=0}^{\infty} k\, x^k = \frac{x}{(1-x)^2} \qquad (2.11)$$
As the derivation is lengthy, we directly present the simplified result below.
$$-H(X_\Delta) = \log_2(1-\rho) + \frac{\rho}{1-\rho}\,\log_2\rho \qquad (2.12)$$
$$H(X_\Delta) = -\log_2(1-\rho) + \frac{\lambda\Delta\,\rho}{1-\rho}\,\log_2 e \qquad (2.13)$$
Where $\rho = e^{-\lambda\Delta}$. We see that even with a simple distribution function, the exact expressions quickly get out of hand and are unfortunately not as short and elegant as one would have expected.
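A quick numerical sanity check of the closed form in (2.13) against direct summation of the pmf; a Python/NumPy sketch with illustrative parameter values.
\begin{verbatim}
import numpy as np

def H_exact(lam, delta):
    """Closed-form entropy of the quantized Exponential variable, eq. (2.13)."""
    rho = np.exp(-lam * delta)
    return -np.log2(1 - rho) + lam * delta * np.log2(np.e) * rho / (1 - rho)

def H_direct(lam, delta, k_max=10_000):
    """Entropy by summing the pmf p_k = (1 - rho) * rho**k directly."""
    rho = np.exp(-lam * delta)
    p = (1 - rho) * rho ** np.arange(k_max)
    p = p[p > 0]                                   # drop underflowed tail terms
    return -np.sum(p * np.log2(p))

print(H_exact(0.5, 1.0), H_direct(0.5, 1.0))       # the two should agree closely
\end{verbatim}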
First published: 3rd Oct. 2015 on aravindhk-math.blogspot.com
Modified: 16th Dec. 2023 – Style updates for LaTeX
This post is about entropy of discrete stochastic variables that are derived by quantizing continuous stochastic variables. A good introduction to the concept of entropy in Information Theory is in [5]. This post is a continuation of the previous post in Section 2.1.1.
Let $X$ be a continuous stochastic variable with a zero-mean Laplace distribution with parameter $\lambda$, and let $X_\Delta = \operatorname{sgn}(X)\,k\Delta$, such that $k\Delta \le |X| < (k+1)\Delta$ and $k \in \{0, 1, 2, \ldots\}$, be its uniformly quantized version with step size $\Delta$. We would like to find the entropy of the discrete stochastic variable $X_\Delta$, denoted by $H(X_\Delta)$, which provides us an estimate of the average number of bits required to encode $X_\Delta$.
The half-Laplace distribution refers to the distribution obtained by folding the zero-mean Laplace distribution function along the center. The new distribution is equivalent to the distribution of the stochastic variable $|X|$, i.e. the absolute value of the original Laplace distributed variable.
Let $f_X$ and $F_X$ denote the probability density function and cumulative distribution function of the Laplace variable $X$. These are defined as follows.
$$f_X(x) = \frac{\lambda}{2}\, e^{-\lambda |x|} \qquad (2.14)$$
$$F_X(x) = \begin{cases} \dfrac{1}{2}\, e^{\lambda x}, & x < 0 \\[4pt] 1 - \dfrac{1}{2}\, e^{-\lambda x}, & x \geq 0 \end{cases} \qquad (2.15)$$
The half-Laplace cumulative distribution function of $|X|$, denoted by $F_{|X|}$, is then given as follows.
$$F_{|X|}(x) = F_X(x) - F_X(-x) \qquad (2.16)$$
$$= 1 - e^{-\lambda x}, \qquad x \geq 0 \qquad (2.17)$$
Let $p_{X_\Delta}$ denote the probability mass function of the quantized variable $X_\Delta$. Using the half-Laplace distribution function from above, it is given as follows.
$$p_{X_\Delta}(k\Delta) = \begin{cases} F_{|X|}(\Delta), & k = 0 \\[4pt] \dfrac{1}{2}\Big[ F_{|X|}\big((|k|+1)\Delta\big) - F_{|X|}\big(|k|\Delta\big) \Big], & k \neq 0 \end{cases} \qquad (2.18)$$
Further, let $\mathcal{X}^{+} = \{\Delta, 2\Delta, 3\Delta, \ldots\}$ be the set containing the positive quantized values. The negative entropy is given (by definition) as follows. Note that the simplification is possible due to the symmetry of the Laplace distribution.
$$-H(X_\Delta) = \sum_{k=-\infty}^{\infty} p_{X_\Delta}(k\Delta)\, \log_2 p_{X_\Delta}(k\Delta) \qquad (2.19)$$
$$= p_{X_\Delta}(0)\, \log_2 p_{X_\Delta}(0) + 2 \sum_{x \in \mathcal{X}^{+}} p_{X_\Delta}(x)\, \log_2 p_{X_\Delta}(x) \qquad (2.20)$$
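The folding step from (2.19) to (2.20) is easy to check numerically. A small Python/NumPy sketch follows, assuming the sign-magnitude quantizer and the pmf (2.18) above; the parameter values and truncation point are illustrative.
\begin{verbatim}
import numpy as np

lam, delta, k_max = 1.0, 1.0, 60                      # illustrative choices
rho = np.exp(-lam * delta)

def p(k):
    """pmf of the quantized Laplace variable, eq. (2.18)."""
    return np.where(k == 0, 1 - rho, 0.5 * (1 - rho) * rho ** np.abs(k))

ks = np.arange(-k_max, k_max + 1)
neg_H_full = np.sum(p(ks) * np.log2(p(ks)))           # two-sided sum, eq. (2.19)

kp = np.arange(1, k_max + 1)
neg_H_folded = (1 - rho) * np.log2(1 - rho) \
    + 2 * np.sum(p(kp) * np.log2(p(kp)))              # folded sum, eq. (2.20)

print(neg_H_full, neg_H_folded)                       # identical up to rounding
\end{verbatim}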
By using the definition from equation (2.18) we have the following:
$$-H(X_\Delta) = \big(1 - e^{-\lambda\Delta}\big) \log_2\!\big(1 - e^{-\lambda\Delta}\big) + \big(1 - e^{-\lambda\Delta}\big) \Big[ \log_2\!\frac{1 - e^{-\lambda\Delta}}{2}\, S_0 \;-\; \lambda\Delta \log_2(e)\, S_1 \Big] \qquad (2.21)$$
With the functions $S_0$ and $S_1$ given by:
$$S_0 = \sum_{k=1}^{\infty} e^{-\lambda k \Delta} \qquad (2.22)$$
$$S_1 = \sum_{k=1}^{\infty} k\, e^{-\lambda k \Delta} \qquad (2.23)$$
The analytical expressions for $S_0$ and $S_1$ may be computed starting from the following identities, valid for $|x| < 1$.
$$\sum_{k=1}^{\infty} x^k = \frac{x}{1-x} \qquad (2.24)$$
$$\sum_{k=1}^{\infty} k\, x^k = \frac{x}{(1-x)^2} \qquad (2.25)$$
Upon simplification, we obtain the following results; note that in the limit of small $\lambda\Delta$ they reduce to $h(X) - \log_2\Delta$, where $h(X) = \log_2(2e/\lambda)$ is the differential entropy of the Laplace variable.
$$-H(X_\Delta) = \log_2(1-\rho) - \rho + \frac{\rho}{1-\rho}\,\log_2\rho \qquad (2.26)$$
$$H(X_\Delta) = -\log_2(1-\rho) + \rho + \frac{\lambda\Delta\,\rho}{1-\rho}\,\log_2 e \qquad (2.27)$$
Where $\rho = e^{-\lambda\Delta}$.
We see that the expressions are quite similar to the ones obtained for the Exponential distribution in Section 2.1.1; indeed, they differ only by the additive term $\rho$. This is expected, as the Laplace and Exponential distributions are very closely related.
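The similarity can be made precise numerically. Below is a Python/NumPy sketch, with illustrative parameter values, that cross-checks the closed form (2.27) against direct summation and confirms that it exceeds the Exponential result (2.13) by exactly $\rho$.
\begin{verbatim}
import numpy as np

def H_laplace(lam, delta):
    """Closed-form entropy of the quantized Laplace variable, eq. (2.27)."""
    rho = np.exp(-lam * delta)
    return -np.log2(1 - rho) + rho + lam * delta * np.log2(np.e) * rho / (1 - rho)

def H_exponential(lam, delta):
    """Closed-form entropy of the quantized Exponential variable, eq. (2.13)."""
    rho = np.exp(-lam * delta)
    return -np.log2(1 - rho) + lam * delta * np.log2(np.e) * rho / (1 - rho)

lam, delta = 0.7, 1.0                                  # illustrative values
rho = np.exp(-lam * delta)

# Cross-check of eq. (2.27) by direct summation of the two-sided pmf.
k = np.arange(1, 80)
p_pos = 0.5 * (1 - rho) * rho ** k
H_direct = -((1 - rho) * np.log2(1 - rho) + 2 * np.sum(p_pos * np.log2(p_pos)))

print(H_laplace(lam, delta), H_direct)                         # agree closely
print(H_laplace(lam, delta), H_exponential(lam, delta) + rho)  # differ only by rho
\end{verbatim}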
First published: 5th Oct. 2015 on aravindhk-math.blogspot.com
Modified: 16th Dec. 2023 – Style updates for LaTeX
This is the third post in the series discussing uniform quantization of Laplacian stochastic variables, and is about the entropy of separately coding the sign and the magnitude of uniformly quantized Laplacian variables.
We begin by showing that the distribution of the magnitude of a uniformly quantized Laplacian variable is the same as the distribution of the uniformly quantized magnitude of the Laplacian variable, which was shown in Section 2.1.2 to be equivalent to the distribution of a correspondingly quantized Exponential variable.
Let $X_\Delta$ be the uniformly quantized version, with step size $\Delta$, of a Laplacian variable $X$ with parameter $\lambda$, and let $M$ and $S$ be the variables denoting the magnitude and the sign of $X_\Delta$ respectively. We have $M = |X_\Delta|$ and $S = \operatorname{sgn}(X_\Delta)$.
Let $p_{X_\Delta}$ and $F_{X_\Delta}$ denote the probability mass function and cumulative distribution function of $X_\Delta$ respectively. These are given as follows.
$$p_{X_\Delta}(k\Delta) = \begin{cases} 1 - e^{-\lambda\Delta}, & k = 0 \\[4pt] \dfrac{1}{2}\, e^{-\lambda |k| \Delta}\big(1 - e^{-\lambda\Delta}\big), & k \neq 0 \end{cases} \qquad (2.28)$$
$$F_{X_\Delta}(k\Delta) = \begin{cases} \dfrac{1}{2}\, e^{-\lambda |k| \Delta}, & k < 0 \\[4pt] 1 - \dfrac{1}{2}\, e^{-\lambda (k+1) \Delta}, & k \geq 0 \end{cases} \qquad (2.29)$$
The cumulative distribution function of the discrete variable $M$ is given as follows
$$F_M(k\Delta) = P\big(|X_\Delta| \leq k\Delta\big) = F_{X_\Delta}(k\Delta) - F_{X_\Delta}\big(-(k+1)\Delta\big), \qquad k \geq 0, \qquad (2.30)$$
where $(k+1)\Delta$ denotes the quantization point immediately succeeding $k\Delta$. Substituting the value of $F_{X_\Delta}$ from above we have
$$F_M(k\Delta) = 1 - e^{-\lambda (k+1) \Delta}, \qquad k \geq 0, \qquad (2.31)$$
which can be readily seen to be the cumulative distribution function of a uniformly quantized Exponential variable with the same parameter $\lambda$, quantized with step size $\Delta$. Therefore, the entropy of $M$, denoted by $H(M)$, is as given in Section 2.1.1. Note that a generic version of the above equivalence may be proven for distributions symmetric around zero.
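The distributional equivalence can also be checked by simulation. The sketch below (Python/NumPy, illustrative sample size and parameters, and assuming the sign-magnitude quantizer used throughout) compares a few empirical masses of $M = |X_\Delta|$ with those of a quantized Exponential variable.
\begin{verbatim}
import numpy as np

lam, delta, n = 1.0, 1.0, 1_000_000                    # illustrative choices
rng = np.random.default_rng(1)

# Laplace samples, quantized as sgn(x) * delta * floor(|x| / delta); take the magnitude.
x = rng.laplace(loc=0.0, scale=1.0 / lam, size=n)
m = delta * np.floor(np.abs(x) / delta)

# Exponential samples, quantized with the same step size.
y = rng.exponential(scale=1.0 / lam, size=n)
y_q = delta * np.floor(y / delta)

for k in range(4):                                     # compare a few empirical masses
    print(k, np.mean(m == k * delta), np.mean(y_q == k * delta))
\end{verbatim}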
Next, we find the entropy of the stochastic variable $S$ denoting the sign, which takes values from the set $\{-1, 0, +1\}$. Let $p_S$ denote the probability mass function of the discrete variable $S$. It is given as follows.
$$p_S(s) = \begin{cases} 1 - e^{-\lambda\Delta}, & s = 0 \\[4pt] \dfrac{1}{2}\, e^{-\lambda\Delta}, & s = \pm 1 \end{cases} \qquad (2.32)$$
Encoding the symbol $S = 0$ carries no more information than what is already contained in $M$, as $S = 0$ if and only if $M = 0$. Therefore, we only need to encode the non-zero signs. The entropy of the reduced-size alphabet may be computed using the theorem for the general case given below.
Let $\mathcal{A} = \{a_1, \ldots, a_n\}$ be a finite size coding alphabet with corresponding probabilities $p_1, \ldots, p_n$. Now let $q_1, \ldots, q_n$ be the probabilities that symbols from $\mathcal{A}$ are coded. That is, if $q_i = 1$ then the $i$-th symbol is always coded, if $q_i = 0.5$ then it is coded 50% of the times, and the other 50% of the times it is inferred correctly at the decoder. There are no errors in the decoding process due to the reduced alphabet size. Let
$$H' = -\sum_{i=1}^{n} q_i\, p_i \log_2 \frac{q_i\, p_i}{\sum_{j=1}^{n} q_j\, p_j} \qquad (2.33)$$
denote the entropy of the reduced size coding alphabet. Then the entropy of the reduced sign alphabet may be given by the straightforward result below.
For the case above for $S$, we set $q_0 = 0$ and $q_{-1} = q_{+1} = 1$, that is, we would not encode the symbol '0' but infer it correctly based on the value of $M$. We have the entropy of coding $S$ using this scheme, denoted by $H(S')$, given as follows.
$$H(S') = -2 \cdot \frac{e^{-\lambda\Delta}}{2} \log_2\!\left( \frac{e^{-\lambda\Delta}/2}{e^{-\lambda\Delta}} \right) = e^{-\lambda\Delta} \qquad (2.34)$$
Finally, using the value of the entropy $H(X_\Delta)$ from Section 2.1.2, $H(M)$ from Section 2.1.1, and $H(S')$ from above, we have the following upon simplification.
$$H(M) + H(S') = -\log_2\!\big(1 - e^{-\lambda\Delta}\big) + \frac{\lambda\Delta\, e^{-\lambda\Delta}}{1 - e^{-\lambda\Delta}} \log_2 e + e^{-\lambda\Delta} = H(X_\Delta) \qquad (2.35)$$
Therefore, encoding the magnitude and the reduced sign has the same entropy as encoding the uniformly quantized Laplacian variable.
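As a final check, equation (2.35) can be verified numerically by plugging the closed forms from Sections 2.1.1 and 2.1.2 into a few lines of Python/NumPy; the parameter values below are illustrative.
\begin{verbatim}
import numpy as np

lam, delta = 1.3, 0.5                                  # illustrative values
rho = np.exp(-lam * delta)

H_M = -np.log2(1 - rho) + lam * delta * np.log2(np.e) * rho / (1 - rho)        # Section 2.1.1
H_S_reduced = rho                                                              # eq. (2.34)
H_X = -np.log2(1 - rho) + rho + lam * delta * np.log2(np.e) * rho / (1 - rho)  # Section 2.1.2

print(H_M + H_S_reduced, H_X)                          # the two sides of eq. (2.35)
\end{verbatim}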
First published: 29th Oct. 2015 on aravindhk-math.blogspot.com
Modified: 17th Dec. 2023 – Style updates for LaTeX