where $p(c \,|\, \lambda) = \{\, p(c=1)=\lambda_1,\; p(c=2)=\lambda_2,\; \ldots,\; p(c=C)=\lambda_C \,\}$.
$p(x \,|\, M)$ is known as the evidence for model $M$ and quantifies the likelihood of the observed data under model $M$. A Bayesian solution can be obtained by integrating out the parameters $\{\lambda, \Theta\}$ and the hidden variables $\{s_c, q\}$.
A set of prior distributions is assumed over all possible parameter values. For instance, the prior over the source model (MoG) parameters is defined as a product of priors over $\pi_c$, $\mu_c$, $\beta_c$; thus $p(\phi) = \prod_{c=1}^{C} p(\pi_c)\, p(\mu_c)\, p(\beta_c)$. In addition, the following priors are defined over:
the ICA mixture indicator variables, $p(c \,|\, \lambda)$; the ICA mixture coefficients, $p(\lambda)$; the mixture proportions, $p(\pi)$; the mean and precision of each MoG, $p(\mu)$ and $p(\beta)$; the bias vector, $p(y)$; the sensor noise precision, $p(\Lambda)$; each element of the mixing matrix, $p(A)$, with a precision $\alpha_i$ for each column; and the relevance of each source, $p(\alpha)$.
The optimization follows from Bayes' rule, $\log p(X) = \log \frac{p(X, w)}{p(w \,|\, X)}$, where $w$ is the vector of all hidden variables and unknown parameters. This can be written as
$$\log p(X) = \int p'(w)\, \log\!\left[\frac{p(X, w)}{p'(w)}\, \frac{p'(w)}{p(w \,|\, X)}\right] dw = \int p'(w)\, \log\frac{p(X, w)}{p'(w)}\, dw + \int p'(w)\, \log\frac{p'(w)}{p(w \,|\, X)}\, dw \qquad (2.47)$$

$$= F[w] + \mathrm{KL}[p' \,\|\, p],$$

where $p'(w)$ is some approximation to the posterior $p(w \,|\, X)$; $F[w] = \langle \log p(X, w) \rangle_{p'(w)} + H[p'(w)]$; and $\mathrm{KL}[p' \,\|\, p] = \int p'(w) \log \frac{p'(w)}{p(w \,|\, X)}\, dw$. $H[p'(w)]$ is the entropy of $p'(w)$, and KL is the Kullback-Leibler divergence.
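The identity $\log p(X) = F[w] + \mathrm{KL}[p' \,\|\, p]$ can be checked exactly on a toy conjugate model, which is the point of the decomposition: since KL is non-negative, $F$ is a lower bound on the log evidence, with gap equal to the KL divergence. The model below (a scalar Gaussian mean with a Gaussian prior) is invented for the check and is not the chapter's model.

```python
import math

# Toy conjugate model: prior w ~ N(0, 1), likelihood x | w ~ N(w, 1),
# one observation x. Everything is tractable, so we can verify
# log p(X) = F + KL for an arbitrary approximating density p'(w) = N(m, v).
x = 1.3
m, v = 0.2, 0.5

# F = <log p(x, w)>_{p'(w)} + H[p'(w)]
e_loglik = -0.5 * math.log(2 * math.pi) - 0.5 * ((x - m) ** 2 + v)
e_logprior = -0.5 * math.log(2 * math.pi) - 0.5 * (m ** 2 + v)
entropy = 0.5 * math.log(2 * math.pi * math.e * v)
F = e_loglik + e_logprior + entropy

# Exact posterior is w | x ~ N(x/2, 1/2); closed-form Gaussian KL[p' || p]
mp, vp = x / 2, 0.5
kl = 0.5 * (math.log(vp / v) + (v + (m - mp) ** 2) / vp - 1)

# Exact log evidence: p(x) = N(x; 0, 2)
log_evidence = -0.5 * math.log(4 * math.pi) - x ** 2 / 4

print(abs(F + kl - log_evidence) < 1e-12)   # True: F + KL equals log p(X)
```

Maximizing $F$ over $p'$ is therefore equivalent to minimizing the KL divergence from $p'(w)$ to the true posterior, without ever needing $p(w \,|\, X)$ itself.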
In the mixture model, $w = \{c, s, q, \lambda, \Theta\}$. By choosing $p'(w)$ such that it factorizes, the terms in each hidden variable can be maximized individually. In [53], the following factorization was chosen:
$$p'(w) = p'(c)\, p'(s_c \,|\, q_c, c)\, p'(q_c \,|\, c)\, p'(\lambda)\, p'(y)\, p'(\Lambda)\, p'(A)\, p'(\alpha)\, p'(\phi) \qquad (2.48)$$
where $p'(\phi) = p'(\pi)\, p'(\mu)\, p'(\beta)$, and $p'(a \,|\, b)$ denotes the approximating density of the posterior $p(a \,|\, b, X)$.
The posteriors over the sources were also factorized, such that $p'(s_c, q_c \,|\, c) = p'(q_c \,|\, c) \prod_{i=1}^{L_c} p'(s_{c,i} \,|\, q_i, c)$.
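The practical payoff of a factorized $p'(w)$ is that each factor can be updated in turn while the others are held fixed. The following minimal mean-field sketch shows this coordinate-ascent pattern on a toy two-dimensional Gaussian target; the target, its mean, and its precision matrix are invented for the illustration and are unrelated to the ICAMM model.

```python
import numpy as np

# Toy target "posterior": a 2-D Gaussian with mean mu and precision P.
# We fit a factorized approximation p'(w) = p'(w1) p'(w2) by updating
# each factor in turn while the other is held fixed (mean-field
# coordinate ascent), the same term-by-term maximization a factorized
# p'(w) permits.
mu = np.array([1.0, -1.0])
P = np.array([[2.0, 1.0], [1.0, 2.0]])

m = np.zeros(2)                  # means of the two Gaussian factors
for _ in range(50):
    m[0] = mu[0] - P[0, 1] / P[0, 0] * (m[1] - mu[1])
    m[1] = mu[1] - P[1, 0] / P[1, 1] * (m[0] - mu[0])

print(np.allclose(m, mu))        # True: factor means reach the true mean
# Each factor's variance is 1 / P[i, i] = 0.5, which understates the true
# marginal variance (2/3): a known cost of the factorized approximation.
```

In the ICAMM case the same scheme cycles through the factors of Eq. (2.48), updating each approximate posterior from expectations taken under the others until $F$ converges.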
2.5 Conclusions
In this chapter, an overview of current techniques in ICA and ICA mixture modelling (ICAMM) has been presented. These techniques establish a framework for non-linear processing of data with complex non-Gaussian distributions.