Classical statistical signal processing relies on exploiting second-order information. Spectral analysis and linear adaptive filtering are probably the most representative examples. From the perspective of optimality (optimum detection and estimation), second-order statistics are sufficient statistics when Gaussianity holds, but they lead to suboptimal solutions when dealing with general probability density models. A natural evolution of statistical signal processing, enabled by the progressive increase in computational power, has been the exploitation of higher-order information. Thus, higher-order spectral analysis and nonlinear adaptive filtering have received the attention of many researchers in this field.
The transition from PCA to ICA clearly belongs to this evolution from second-order to higher-order information. Briefly, PCA is a technique for linearly transforming a vector of correlated components into a vector of variance-ordered uncorrelated components, whereas ICA linearly transforms a vector of statistically dependent components into a vector of unordered independent components. ICA can also be viewed as a natural evolution of the prewhitening linear transformation (like PCA, but without producing any variance ordering). When Gaussianity holds, ICA and prewhitening yield equivalent transformations, and infinitely many solutions exist, since any rotation of the prewhitened vector preserves the uncorrelatedness of its components. When non-Gaussianity is present, however, ICA produces a different transformation, which can be made unique if appropriate constraints are introduced into the design. That is why ICA has become so popular as a technique for blind source separation when at most one source is Gaussian.
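As an illustration of this difference, the following minimal sketch (assuming NumPy and scikit-learn are available; the toy sources, mixing matrix, and variable names are ours, not from the chapter) whitens a two-channel mixture with PCA and then unmixes it with FastICA. The whitened components are uncorrelated but each remains a mixture of the sources, while the ICA outputs align with the sources up to permutation and sign.

```python
# Minimal sketch (assumed dependencies: numpy, scikit-learn): PCA whitening
# versus ICA on a toy blind source separation problem.
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
n = 5000

# Two non-Gaussian (uniform) sources and a hypothetical mixing matrix.
S = rng.uniform(-1.0, 1.0, size=(n, 2))
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
X = S @ A.T                          # observed mixtures

# PCA/whitening: components are uncorrelated and variance-ordered,
# but each one is still a mixture of the original sources.
Z = PCA(whiten=True).fit_transform(X)

# ICA: rotates the whitened data towards statistical independence
# (non-Gaussianity); sources are recovered up to permutation and sign.
Y = FastICA(n_components=2, random_state=0).fit_transform(X)

# Correlation of each output with the true sources: the PCA components
# correlate with both sources, the ICA components with essentially one each.
print(np.round(np.corrcoef(S.T, Z.T)[:2, 2:], 2))
print(np.round(np.corrcoef(S.T, Y.T)[:2, 2:], 2))
```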
Even more interesting is to recognize that ICA implicitly assumes a model for multivariate pdfs: the multivariate pdf of the transformed vector is the product of the (one-dimensional) marginal pdfs of its components. Working with one-dimensional pdfs makes many complex problems involving multivariate pdfs tractable. This perspective suggests that ICA can be a useful tool in areas of intensive data analysis. Indeed, dealing with estimates of pdfs, or defining optimality criteria involving pdfs (such as entropy, mutual information, or Kullback-Leibler divergences), can be considered the latest generation of statistical signal processing approaches: a natural evolution from second-order and higher-order statistics to data distribution information.
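To make this model and the associated criteria concrete (written here in standard ICA notation, not taken verbatim from the chapter), for a transformed vector $\mathbf{y} = \mathbf{W}\mathbf{x}$ the independence assumption reads

$$ p_{\mathbf{y}}(\mathbf{y}) = \prod_{i=1}^{N} p_{y_i}(y_i), $$

and a natural contrast is the mutual information, i.e., the Kullback-Leibler divergence between the joint pdf and the product of its marginals,

$$ I(\mathbf{y}) = D_{\mathrm{KL}}\!\left( p_{\mathbf{y}} \,\Big\|\, \prod_{i=1}^{N} p_{y_i} \right) = \sum_{i=1}^{N} H(y_i) - H(\mathbf{y}), $$

which vanishes if and only if the components of $\mathbf{y}$ are independent.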
In this chapter, we have reviewed some of the most representative ICA algorithms derived from entropy, cumulant, and time-structure perspectives: InfoMax, JADE, FastICA, and TDSEP.
In addition, we have reviewed the principal non-parametric ICA algorithms (Npica, Radical, Kernel-ICA), which belong to a research direction that pursues greater generality of the methods, so that BSS can be performed with essentially no prior information about the sources.
Some authors have termed the approaches above non-linear information processing [66]. This is relevant because non-linear information processing establishes a bridge between statistical signal processing and the computational and artificial intelligence sciences. That is why many people from signal processing are increasingly involved in areas such as data mining, machine learning, and clustering, and many researchers from the computational sciences are working on new data-intensive signal and image processing applications.