corresponds to a nonlinearity with density $p_i$ (e.g., $p_i(z) = c_2/\cosh(z)$ for super-gaussian signals) that allows switching between sub-gaussian and super-gaussian densities, as in the Extended InfoMax algorithm [23]. This minimization is equivalent to maximizing the following quasi $\beta$-likelihood function:
$$
L_\beta(W, \mu) = \frac{1}{n}\sum_{t=1}^{n} l_\beta\left(x_t; W, \mu\right) \qquad (2.41)
$$
where
$$
l_\beta(x; W, \mu) =
\begin{cases}
\log\left(r_0(x; W, \mu)\right), & \text{for } \beta = 0,\\[6pt]
\dfrac{1}{\beta}\, r_0^{\beta}(x; W, \mu) - b_\beta(W) - \dfrac{1}{\beta}, & \text{for } 0 < \beta < 1,
\end{cases}
$$
and
$$
b_\beta(W) = \frac{1}{\beta + 1}\int r_0^{\beta + 1}(x; W, \mu)\, dx
= \frac{\left|\det(W)\right|^{\beta}}{\beta + 1}\int \prod_{i=1}^{m} p_i^{\beta + 1}(z_i)\, dz.
$$
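As a rough numerical illustration of Eq. (2.41), the following Python sketch evaluates the quasi $\beta$-likelihood for a two-dimensional toy data set. It is only a minimal sketch under stated assumptions: the source density $p_i(z) = 1/(\pi\cosh(z))$, the numerical integration used for $b_\beta(W)$, and all function names are illustrative choices, not the estimator of [23] or the implementation described here.

```python
import numpy as np

def source_pdf(z):
    # Assumed super-gaussian source density p(z) = 1 / (pi * cosh(z))
    return 1.0 / (np.pi * np.cosh(z))

def r0(X, W, mu):
    # Model density r_0(x; W, mu) = |det W| * prod_i p_i(w_i^T (x - mu))
    Z = (X - mu) @ W.T                              # recovered sources, shape (n, m)
    return np.abs(np.linalg.det(W)) * np.prod(source_pdf(Z), axis=1)

def b_beta(W, beta, lo=-30.0, hi=30.0, num=20001):
    # b_beta(W) = |det W|^beta / (beta + 1) * prod_i  integral of p_i^{beta+1}(z) dz
    z = np.linspace(lo, hi, num)
    one_dim = np.sum(source_pdf(z) ** (beta + 1.0)) * (z[1] - z[0])
    m = W.shape[0]
    return np.abs(np.linalg.det(W)) ** beta / (beta + 1.0) * one_dim ** m

def quasi_beta_likelihood(X, W, mu, beta):
    # L_beta(W, mu) = (1/n) * sum_t l_beta(x_t; W, mu), Eq. (2.41)
    r = r0(X, W, mu)
    if beta == 0.0:                                  # beta = 0 reduces to the log-likelihood
        return np.mean(np.log(r))
    return np.mean(r ** beta / beta - b_beta(W, beta) - 1.0 / beta)

rng = np.random.default_rng(0)
X = rng.laplace(size=(1000, 2))                      # toy super-gaussian data
W, mu = np.eye(2), np.zeros(2)
print(quasi_beta_likelihood(X, W, mu, beta=0.1))
```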
2.4.3 Variational Mixture of Bayesian ICAs
Bayesian inference and variational learning were introduced for the estimation of the ICAMM parameters in [53]. A mixture of Gaussians was used as the source model.
The generative model for a data vector x in this approach is shown in Fig. 2.4 .
The probability of generating a data vector $x^n$ from a $C$-component mixture model given assumptions $\mathcal{M}$ is:
$$
p\left(x^n \mid \mathcal{M}\right) = \sum_{c=1}^{C} p\left(c \mid \mathcal{M}_0\right)\, p\left(x^n \mid \mathcal{M}_c, c\right) \qquad (2.42)
$$
A data vector is generated by choosing one of the $C$ components stochastically under $p(c \mid \mathcal{M}_0)$ and then drawing from $p(x^n \mid \mathcal{M}_c, c)$, where $\mathcal{M} = \{\mathcal{M}_0, \mathcal{M}_1, \ldots, \mathcal{M}_C\}$ is the vector of component model assumptions, $\mathcal{M}_c$, and of assumptions about the mixture process, $\mathcal{M}_0$. The assumptions represent everything that essentially defines the model (values of fixed parameters, model structure, details of the component switching method, any prior information, etc.); a sketch of this generative process is given below.
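The following Python sketch illustrates this two-stage generative process: a component $c$ is drawn under $p(c \mid \mathcal{M}_0)$ and the observation is then drawn from the chosen component ICA model. The component count, mixing matrices, bias vectors, and noise precisions are made-up toy values, not parameters from [53].

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical C = 2 component mixture of ICA models in S = 2 dimensions.
S, L = 2, 2
pi_c = np.array([0.6, 0.4])                        # p(c | M_0)
A = [rng.normal(size=(S, L)) for _ in range(2)]    # mixing matrices A_c
y = [np.array([5.0, 0.0]), np.array([-5.0, 0.0])]  # bias vectors y_c
lam = [100.0, 100.0]                               # noise precisions lambda_c

def generate(n):
    X = np.empty((n, S))
    for t in range(n):
        c = rng.choice(2, p=pi_c)                  # pick a component under p(c | M_0)
        s = rng.laplace(size=L)                    # super-gaussian sources s_c
        e = rng.normal(scale=lam[c] ** -0.5, size=S)  # isotropic zero-mean Gaussian noise
        X[t] = A[c] @ s + y[c] + e                 # x = A_c s_c + y_c + e_c
    return X

X = generate(500)
```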
The probability of observing data vector $x^n$ under the $c$th component ICA model ($x = A_c s_c + y_c + e_c$, where $s_c$ are the sources of dimension $L_c$, $y_c$ is an $S$-dimensional bias vector, and $e_c$ is $S$-dimensional additive noise) is given by
$$
p\left(x^n \mid \theta_c, c\right) = \left(\frac{\lambda_c}{2\pi}\right)^{S/2} \exp\left[-E_c\right] \qquad (2.43)
$$
where $\theta_c = \{A_c, s_c, \lambda_c\}$, $E_c = \frac{\lambda_c}{2}\left(x^n - A_c s_c - y_c\right)^{\mathrm{T}}\left(x^n - A_c s_c - y_c\right)$, and $\lambda_c$ is related to the variance of the noise, which is considered zero-mean, Gaussian, and isotropic.
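A minimal sketch of evaluating Eq. (2.43) for a single observation is given below, assuming isotropic zero-mean Gaussian noise with precision $\lambda_c$; the helper name and toy values are hypothetical and do not reproduce the variational machinery of [53].

```python
import numpy as np

def component_likelihood(x, A_c, s_c, y_c, lam_c):
    """Evaluate p(x | theta_c, c) as in Eq. (2.43) for one data vector."""
    S = x.shape[0]
    resid = x - A_c @ s_c - y_c                 # x^n - A_c s_c - y_c
    E_c = 0.5 * lam_c * resid @ resid           # E_c = (lambda_c / 2) * ||resid||^2
    return (lam_c / (2.0 * np.pi)) ** (S / 2.0) * np.exp(-E_c)

# Toy usage with S = 2 observations and L_c = 2 sources
x = np.array([0.3, -1.2])
A_c = np.array([[1.0, 0.5], [0.2, 1.0]])
s_c = np.array([0.1, -0.9])
y_c = np.zeros(2)
print(component_likelihood(x, A_c, s_c, y_c, lam_c=4.0))
```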