Appendix F
Entropy Estimation
Let us assume an i.i.d. sample X_n = (x_1, ..., x_n) drawn from some univariate continuous distribution with unknown PDF f(x). The general problem to be addressed is how to use X_n in order to obtain an estimate of a functional of f(x), such as the entropy H(X). We are particularly interested here in estimating the Shannon entropy and Rényi's quadratic entropy.
Estimators of PDF functionals can be of four types [23]: integral estimator;
plug-in estimator; splitting data estimator; cross-validation estimator.
F.1 Integral and Plug-in Estimates
The integral estimator corresponds to an idea that immediately jumps to mind: substitute f(x) by an estimate f_n(x) in the formula of the functional. When the functional is the Shannon entropy, this amounts to computing

H_S(X) = -\int f_n(x) \ln f_n(x) \, dx .        (F.1)
Unfortunately, in this case the computation of H_S(X) requires numerical integration. As an alternative for such cases, one can substitute f(x) by f_n(x) in the empirical expression of the functional; for the Shannon entropy this yields the sample average -\frac{1}{n}\sum_{i=1}^{n} \ln f_n(x_i). This corresponds to the plug-in (or resubstitution) estimator and is usually easy to implement.
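As an illustration of the two approaches, the sketch below computes both estimates of the Shannon entropy with a Gaussian kernel density estimate in the role of f_n(x); the synthetic sample, the default bandwidth of scipy's gaussian_kde, and the finite integration limits are assumptions made only for this example, not part of the text.

import numpy as np
from scipy import integrate, stats

# Synthetic univariate sample; purely illustrative.
rng = np.random.default_rng(0)
sample = rng.normal(loc=0.0, scale=1.0, size=500)

# Gaussian kernel density estimate f_n(x) standing in for the unknown f(x).
f_n = stats.gaussian_kde(sample)

def shannon_integral_estimate(kde, lo=-10.0, hi=10.0):
    # Integral estimate: plug f_n into the entropy functional and integrate numerically.
    value, _ = integrate.quad(lambda t: -kde(t)[0] * np.log(kde(t)[0]), lo, hi)
    return value

def shannon_plugin_estimate(kde, data):
    # Plug-in (resubstitution) estimate: sample average of -ln f_n(x_i) over the data.
    return -np.mean(np.log(kde(data)))

print("integral estimate:", shannon_integral_estimate(f_n))
print("plug-in estimate: ", shannon_plugin_estimate(f_n, sample))

For a standard normal sample both values should be close to the true entropy of about 1.42 nats; the plug-in version avoids the quadrature call entirely.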
F.2 Integral Estimate of Rényi's Quadratic Entropy
The integral estimate of Rényi's quadratic entropy has a computationally interesting form, which does not raise the above-mentioned problem of requiring numerical integration, when the PDF estimate f_n(x) is obtained by the Parzen window method with a Gaussian kernel.
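This follows because the integral of the product of two Gaussian kernels is again a Gaussian, of variance 2\sigma^2, evaluated at the difference of their centres; \int f_n^2(x) \, dx therefore reduces to a double sum of Gaussian evaluations over the sample pairs (often called the information potential), and no numerical integration is required. A minimal sketch of this estimate for a univariate sample follows; the function name, the fixed bandwidth sigma, and the data in the usage line are assumptions made only for this example.

import numpy as np

def renyi_quadratic_integral_estimate(sample, sigma):
    # Integral estimate of Renyi's quadratic entropy, assuming a Parzen window
    # estimate f_n with a Gaussian kernel of bandwidth sigma.  Since the integral
    # of a product of two Gaussian kernels is a Gaussian of variance 2*sigma^2
    # evaluated at the difference of their centres,
    #   int f_n(x)^2 dx = (1/n^2) * sum_i sum_j G_{sigma*sqrt(2)}(x_i - x_j),
    # so no numerical integration is needed.
    x = np.asarray(sample, dtype=float)
    n = x.size
    diffs = x[:, None] - x[None, :]               # all pairwise differences x_i - x_j
    var = 2.0 * sigma ** 2                        # variance of the convolved kernel
    gauss = np.exp(-diffs ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)
    information_potential = gauss.sum() / n ** 2  # estimate of int f_n(x)^2 dx
    return -np.log(information_potential)

# Illustrative usage with an arbitrary sample and bandwidth (both are assumptions).
rng = np.random.default_rng(0)
print(renyi_quadratic_integral_estimate(rng.normal(size=300), sigma=0.5))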