4.2 Independent Component Analysis
In independent component analysis, a random vector $x : \Omega \to \mathbb{R}^m$, called a mixed vector, is given, and the task is to find a transformation $f(x)$ of $x$ out of a given analysis model such that $f(x)$ is as statistically independent as possible.
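For intuition, here is a minimal numerical sketch (our own toy example, not from the text): two independent sources are combined by an unknown linear map, and the components of the resulting mixed vector become statistically dependent, which is exactly what an ICA is meant to undo.

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.uniform(-1, 1, size=(2, 10000))   # independent sources
A = np.array([[2.0, 1.0], [1.0, 1.0]])    # unknown mixing map (illustrative)
x = A @ s                                 # the observed mixed vector

# Independent sources are uncorrelated; the mixed components are not.
print(np.corrcoef(s))   # off-diagonal entries near 0
print(np.corrcoef(x))   # off-diagonal entries clearly nonzero
```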
Definition
First we will define ICA in its most general sense. Later we will mainly
restrict ourselves to linear ICA.
Definition 4.1 (ICA): Let $x : \Omega \to \mathbb{R}^m$ be a random vector. A measurable mapping $g : \mathbb{R}^m \to \mathbb{R}^n$ is called an independent component analysis (ICA) of $x$ if $y := g(x)$ is independent. The components $y_i$ of $y$ are said to be the independent components (ICs) of $x$.
We speak of square ICA if $m = n$. Usually, $g$ is then assumed to be invertible.
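As a quick illustration (our own example, assuming the linear mixing model to which the text later restricts itself): for an invertible linear mixture, a square ICA can be written down explicitly.

```latex
% Worked example; the linear model x = As is an assumption here.
\[
  x = As,\quad A \in \mathrm{GL}(m),\ s \text{ independent}
  \;\Longrightarrow\;
  y := g(x) = A^{-1}x = s \text{ is independent},
\]
% so g = A^{-1} is an invertible square ICA of x, with n = m.
```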
Properties
It is well-known [125] that without additional restrictions on the mapping $g$, ICA has too many inherent indeterminacies, meaning that there exists a very large set of ICAs which is not easily described. To show this, Hyvärinen and Pajunen construct two fundamentally different (nonlinear) decompositions of an arbitrary random vector, demonstrating that independence alone is too weak a condition in this general setting.
Note that if $g$ is an ICA of $x$, then $I(g(x)) = 0$. So if there is some parametric way of describing all allowed maps $g$, a possible algorithm for finding ICAs is simply to minimize the mutual information with respect to $g$:
\[
  g_0 = \operatorname*{argmin}_g I(g(x)).
\]
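To make this concrete, here is a minimal sketch of the idea in the two-dimensional linear case (our own illustration; the whitening step, the rotation parametrization of $g$, and the histogram plug-in estimate of $I$ are all assumptions, not the text's algorithm):

```python
import numpy as np

def mi_histogram(y, bins=16):
    """Plug-in estimate of the mutual information I(y1; y2) from a 2-D histogram."""
    pxy, _, _ = np.histogram2d(y[0], y[1], bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of y1
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y2
    mask = pxy > 0
    return float((pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])).sum())

rng = np.random.default_rng(0)
s = rng.uniform(-1, 1, size=(2, 5000))        # independent sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])        # mixing matrix (illustrative)
x = A @ s                                      # mixed vector

# Whiten x so that the remaining freedom in g is (approximately) a rotation.
d, E = np.linalg.eigh(np.cov(x))
z = E @ np.diag(d ** -0.5) @ E.T @ x

# Grid search: g_0 = argmin over rotation angles of I(g_theta(z)).
thetas = np.linspace(0, np.pi / 2, 90)
mis = [mi_histogram(np.array([[np.cos(t), -np.sin(t)],
                              [np.sin(t),  np.cos(t)]]) @ z) for t in thetas]
best = thetas[int(np.argmin(mis))]
print(f"best rotation angle: {best:.3f} rad, MI estimate: {min(mis):.4f}")
```

Grid search over a single angle is of course only feasible in such a toy setting; practical algorithms rely on the approximations of $I$ mentioned next.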
This is called minimum mutual information (MMI). Of course, in practice the mutual information is very hard to calculate, so approximations of $I$ will have to be found. Sections 4.5, 4.6, and 4.7 will present some classical ICA algorithms. Often, instead of minimizing the mutual information, the output entropy is maximized, which is known as the principle of maximum entropy (ME). This will be discussed in more detail.
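To give a flavor of the ME principle, here is a sketch under assumptions the text has not yet introduced: a linear demixing $u = Wx$, a fixed logistic nonlinearity, and the classical Bell–Sejnowski natural-gradient update, which maximizes the entropy of the squashed output (named here explicitly, since the text does not specify an algorithm at this point).

```python
import numpy as np

rng = np.random.default_rng(1)
# Super-Gaussian (Laplace-like) independent sources, linearly mixed.
s = np.sign(rng.standard_normal((2, 5000))) * rng.exponential(1.0, (2, 5000))
A = np.array([[1.0, 0.5], [0.3, 1.0]])
x = A @ s

W = np.eye(2)    # demixing matrix to be learned
lr = 0.01
for _ in range(200):
    for batch in np.array_split(x, 50, axis=1):
        u = W @ batch
        phi = 1.0 - 2.0 / (1.0 + np.exp(-u))   # 1 - 2*sigmoid(u) = -tanh(u/2)
        # Natural-gradient ascent on the output entropy H(sigmoid(Wx)).
        grad = (np.eye(2) + phi @ u.T / batch.shape[1]) @ W
        W += lr * grad

# ICA recovers the sources only up to scaling and permutation.
print("W @ A (should be close to a scaled permutation matrix):")
print(W @ A)
```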