This expression can be matched with the expression of the likelihood in Eq. (2.9). If the nonlinearities g_i are chosen as the cumulative distribution functions corresponding to the densities p_i, i.e., g_i'(·) = p_i(·), the output entropy is equal to the likelihood. Thus, InfoMax is equivalent to maximum likelihood estimation (see for instance [5, 36]).
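The condition g_i'(·) = p_i(·) can be checked numerically for a concrete choice of nonlinearity. The following sketch (the logistic pair is an illustrative assumption, not taken from the text) verifies that the derivative of the logistic CDF equals the logistic density:

```python
import numpy as np

# The logistic CDF g is the cumulative distribution function of the
# logistic density p(u) = g(u) * (1 - g(u)), so g'(u) = p(u) should hold.
g = lambda u: 1.0 / (1.0 + np.exp(-u))   # logistic CDF (a common InfoMax nonlinearity)
p = lambda u: g(u) * (1.0 - g(u))        # the corresponding density

u = np.linspace(-5, 5, 1001)
h = 1e-6
g_prime = (g(u + h) - g(u - h)) / (2 * h)  # central finite difference of g
max_err = np.max(np.abs(g_prime - p(u)))   # should be ~0 up to finite-difference error
```

With this pairing, maximizing the output entropy of g(Bx) and maximizing the likelihood of the model coincide, which is the equivalence stated above.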
The first implementation of InfoMax [ 32 ] employed a stochastic gradient
algorithm. Afterwards, the algorithm convergence was accelerated using natural
gradient [ 37 ]. InfoMax was extended in [ 23 ] (Extended InfoMax) for blind sep-
aration of mixed signals with sub- and super-gaussian source distributions. The
optimization procedure uses stability analysis [ 38 ] to switch between sub- and
super-gaussian regimes. The following is the algorithm learning rule
B
DB / I E g ð s Þ s T
ð 2 : 12 Þ
g i ð s i Þ¼ 2 tanh ð s i Þ is usually used as component-wise nonlinearity for super-
gaussian components and g i
ð s i Þ¼ tanh ð s i Þ s i for sub-gaussian components.
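One iteration of the rule in Eq. (2.12) can be sketched as follows. This is an illustrative implementation, not the reference one: the sub/super-gaussian switch is done here with a simple empirical kurtosis sign test rather than the stability analysis of [38], and the data are hypothetical.

```python
import numpy as np

def extended_infomax_step(B, x, lr=0.01):
    """One natural-gradient update of Eq. (2.12): dB = lr * [I - E{g(s) s^T}] B.

    B : (n, n) current unmixing matrix; x : (n, T) whitened observations.
    """
    s = B @ x                                          # source estimates, shape (n, T)
    # crude regime switch: positive sample kurtosis -> super-gaussian
    kurt = np.mean(s**4, axis=1) - 3 * np.mean(s**2, axis=1) ** 2
    g = np.where(kurt[:, None] > 0,
                 2 * np.tanh(s),                       # super-gaussian nonlinearity
                 np.tanh(s) - s)                       # sub-gaussian nonlinearity
    n, T = s.shape
    grad = (np.eye(n) - (g @ s.T) / T) @ B             # [I - E{g(s) s^T}] B
    return B + lr * grad

# toy usage on synthetic data
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 1000))
B = extended_infomax_step(np.eye(2), x)
```

The `(g @ s.T) / T` term is the sample average replacing the expectation E{g(s) s^T}; in practice the step is iterated until ΔB vanishes.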
2.2.2 JADE
Joint Approximate Diagonalization of Eigen-matrices (JADE) is an algorithm that
belongs to an approach derived from the theory of higher-order cumulants [39]. This approach has been called the higher-order cumulant tensor approach because its implementation is based on tensor algebra. The idea is to represent the fourth-order cumulant statistics of the data by a "quadricovariance tensor" and to compute its "eigenmatrices" to yield the desired components [40]. Tensor algebra enables the manipulation of the multidimensional higher-order cumulant matrices.
It can be shown that the second and third cumulants cum(x_i, x_j) and cum(x_i, x_j, x_k) are equal to the second and third moments E[x_i x_j] and E[x_i x_j x_k]. However, the fourth cumulant differs from the fourth moment of the random variables x_i, x_j, x_k, and x_l; it is defined as

    cum(x_i, x_j, x_k, x_l) = C_ijkl(x)
        = E[x_i x_j x_k x_l] − E[x_i x_j] E[x_k x_l] − E[x_i x_k] E[x_j x_l] − E[x_i x_l] E[x_j x_k]    (2.13)
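Equation (2.13) can be estimated directly from sample moments. The sketch below (variable names and test distributions are illustrative, assuming zero-mean data) computes the fourth-order cumulant and checks it on two known cases: it vanishes for a Gaussian and equals −2/15 for a uniform variable on [−1, 1] (E[u⁴] − 3E²[u²] = 1/5 − 3/9):

```python
import numpy as np

def cum4(xi, xj, xk, xl):
    """Sample fourth-order cross-cumulant of zero-mean variables, Eq. (2.13)."""
    E = np.mean
    return (E(xi * xj * xk * xl)
            - E(xi * xj) * E(xk * xl)
            - E(xi * xk) * E(xj * xl)
            - E(xi * xl) * E(xj * xk))

rng = np.random.default_rng(1)
g = rng.standard_normal(200_000)     # Gaussian: all fourth cumulants vanish
u = rng.uniform(-1, 1, 200_000)      # uniform: fourth cumulant -2/15 (sub-gaussian)

c_gauss = cum4(g, g, g, g)           # ~0 up to sampling error
c_unif = cum4(u, u, u, u)            # ~ -2/15 ~ -0.133
```

This is why fourth-order cumulants, unlike fourth-order moments, serve as a direct gaussianity test: the moment-product terms exactly cancel the Gaussian contribution.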
For independent variables cum(x_i, x_j, x_k, x_l) = 0 (unless i = j = k = l). It means that C_ij(s) = σ_i² δ_ij and C_ijkl(s) = κ_i δ_ijkl, with δ_ij = 1 for i = j and δ_ij = 0 for i ≠ j; δ_ijkl = 1 for i = j = k = l and δ_ijkl = 0 otherwise; where σ_i² is the variance and κ_i is the kurtosis of the source component s_i (σ_i² = E[s_i²], κ_i = E[s_i⁴] − 3E²[s_i²]) [5].
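This diagonal structure can be illustrated with two independent synthetic sources (a sketch; the Laplace/uniform source choice is an assumption for the example, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(2)
# two independent zero-mean sources: one super-gaussian, one sub-gaussian
s = np.vstack([rng.laplace(size=100_000),     # Laplace: kappa = 12 for unit scale
               rng.uniform(-1, 1, 100_000)])  # uniform: negative kurtosis

C2 = (s @ s.T) / s.shape[1]                   # second-order cumulants C_ij(s)
off_diag = C2[0, 1]                           # ~0: C_ij is diagonal

def cum4(a, b, c, d):
    E = np.mean
    return (E(a * b * c * d) - E(a * b) * E(c * d)
            - E(a * c) * E(b * d) - E(a * d) * E(b * c))

cross = cum4(s[0], s[0], s[1], s[1])          # mixed indices: ~0 by independence
diag = cum4(s[0], s[0], s[0], s[0])           # i=j=k=l: ~12, the Laplace kurtosis
```

Only the entries with all indices equal survive, which is exactly the C_ijkl(s) = κ_i δ_ijkl structure that JADE exploits.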
Thus, a measure of distance between the estimated and the source components
can be stated as a distance between cumulants, obtaining the contrast under the
whitening constraint