"
#
X
i
k
i
C
iiii
ð
s
Þ¼
E
X
i
k
i
s
i
3
ð
2
:
14
Þ
If there is no prior knowledge about the sources, in this case about the kurtosis, the contrast function is $\sum_i C_{iiii}(s)^2$. This is equivalent to minimizing $\sum_{ijkl \neq iiii} C_{ijkl}(s)^2$, since $E[ss^T] = I$ [17] (up to a constant).
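As an illustration, the squared-kurtosis contrast above can be estimated directly from samples of whitened signals, using the fact that for zero-mean, unit-variance signals $C_{iiii}(s) = E[s_i^4] - 3$. A minimal NumPy sketch, with function and variable names of our own choosing (not from the text):

```python
import numpy as np

def squared_kurtosis_contrast(s):
    """Sum of squared auto-cumulants C_iiii(s).

    s : array of shape (n_sources, n_samples), assumed zero-mean
        and unit-variance (i.e., whitened).
    """
    s = s - s.mean(axis=1, keepdims=True)
    # For unit-variance signals, C_iiii = E[s_i^4] - 3 (excess kurtosis)
    kurt = (s ** 4).mean(axis=1) - 3.0
    return float(np.sum(kurt ** 2))

rng = np.random.default_rng(0)
gauss = rng.standard_normal((2, 100_000))
laplace = rng.laplace(size=(2, 100_000)) / np.sqrt(2)  # rescaled to unit variance

# Gaussian sources have (near) zero kurtosis, so the contrast is small;
# super-Gaussian (Laplacian) sources give a clearly larger value.
print(squared_kurtosis_contrast(gauss) < squared_kurtosis_contrast(laplace))
```

The Laplacian case illustrates why the squared form needs no prior knowledge: the contrast grows with the kurtosis magnitude regardless of its sign.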
The JADE algorithm [33] approximates independence by minimizing a smaller number of cross cumulants:

$$\phi_{JADE} = \sum_{ijkl \neq ijkk} C_{ijkl}(s)^2 \qquad (2.15)$$
The optimization procedure of JADE tries to find the rotation matrix W such that the cumulant matrices $Q_i$ of the whitened data $z = Vx$ are as diagonal as possible. This solves

$$\arg\min_W \sum_i \mathrm{off}(W Q_i W^T) \qquad (2.16)$$
where the operator $\mathrm{off}(M) = \sum_{i \neq j} M_{ij}^2$ is the sum of the squares of the off-diagonal elements of M. This algorithm is based on the Jacobi method, whose principle is that the rotation matrix W can be approximated by a sequence of elementary rotations $T_k(\phi_k)$, each of which tries to minimize the off-diagonal elements of the respective cumulant matrices. The rotation angles $\phi_k$ (Givens angles) can be calculated in closed form because fourth-order contrasts are polynomial in the parameters [41].
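The Jacobi scheme of Eq. (2.16) can be sketched as follows: sweep over all index pairs (p, q), compute the closed-form Givens angle for that pair, and apply the rotation to every cumulant matrix at once. This is a simplified sketch of the joint-diagonalization step used by JADE (the closed-form angle follows Cardoso and Souloumiac's construction); function names and the demo matrices are ours, not from the text:

```python
import numpy as np

def off(M):
    """off(M) = sum of squared off-diagonal elements, as in Eq. (2.16)."""
    return np.sum(M ** 2) - np.sum(np.diag(M) ** 2)

def joint_diagonalize(matrices, theta_min=1e-8, max_sweeps=100):
    """Jacobi-style joint diagonalization with closed-form Givens angles.

    Rotations smaller than theta_min are skipped, so theta_min plays the
    role of the small angle that controls the accuracy of the optimization.
    Returns the orthogonal W such that W @ M @ W.T is as diagonal as possible.
    """
    A = np.stack([np.array(M, dtype=float) for M in matrices])
    n = A.shape[1]
    W = np.eye(n)
    for _ in range(max_sweeps):
        rotated = False
        for p in range(n - 1):
            for q in range(p + 1, n):
                # Closed-form angle: fourth-order contrasts are polynomial
                # in the rotation parameters, so no gradient search is needed.
                u = A[:, p, p] - A[:, q, q]
                v = A[:, p, q] + A[:, q, p]
                ton = u @ u - v @ v
                toff = 2.0 * (u @ v)
                theta = 0.5 * np.arctan2(toff, ton + np.hypot(ton, toff))
                if abs(theta) > theta_min:
                    rotated = True
                    c, s = np.cos(theta), np.sin(theta)
                    R = np.eye(n)
                    R[p, p] = R[q, q] = c
                    R[p, q], R[q, p] = s, -s
                    A = R @ A @ R.T      # rotate every cumulant matrix at once
                    W = R @ W
        if not rotated:
            break
    return W

# Demo: matrices that share an eigenbasis Q are jointly diagonalized exactly.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
mats = [Q @ np.diag(rng.standard_normal(4)) @ Q.T for _ in range(3)]
W = joint_diagonalize(mats)
print(max(off(W @ M @ W.T) for M in mats) < 1e-8)
```

In the real algorithm the inputs are sample cumulant matrices of the whitened data, which are only approximately jointly diagonalizable; the same sweeps then minimize, rather than annihilate, the residual off-diagonal mass.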
The rotation uses a small angle $\theta_{min}$, which controls the accuracy of the optimization. Thus, cumulant-based algebraic techniques avoid having to use gradient
techniques for optimization. A comprehensive review of higher-order contrasts used in ICA, with a comparison against gradient-based techniques, is given in [42].
2.2.3 FastICA
ICA methods have also been approached from the nongaussianity perspective. As stated above, without nongaussianity the estimation of the independent components is not possible. It is well known from the central limit theorem that the distribution of a sum of independent random variables tends toward a Gaussian distribution, under certain conditions. The ICA estimation can therefore be formulated as the search for directions that are maximally non-Gaussian, and each local maximum gives one independent component [5]. In addition, the Gaussian variable has the maximum differential entropy among variables with a given variance. Thus, in order to find one independent component, we have to minimize entropy, i.e., we have to maximize the nongaussianity.
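The central-limit-theorem argument can be checked numerically: a mixture (sum) of independent non-Gaussian sources is measurably closer to Gaussian than either source alone, so the maximally non-Gaussian direction points back at a source. A small sketch, using excess kurtosis as the nongaussianity measure (the function name is ours):

```python
import numpy as np

def excess_kurtosis(x):
    """Sample excess kurtosis; (near) zero for a Gaussian variable."""
    x = (x - x.mean()) / x.std()
    return (x ** 4).mean() - 3.0

rng = np.random.default_rng(0)
s = rng.laplace(size=(2, 200_000))   # two independent super-Gaussian sources
mix = (s[0] + s[1]) / np.sqrt(2)     # a mixture: sum of independent variables

# By the central limit theorem the mixture is closer to Gaussian, so its
# nongaussianity is smaller than a single source's; searching for the
# maximally non-Gaussian direction therefore recovers a source.
print(abs(excess_kurtosis(mix)) < abs(excess_kurtosis(s[0])))
```

For Laplacian sources this is exact in expectation: each source has excess kurtosis 3, while the equal-weight mixture has only 1.5.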