Fig. 2.4 ICA mixture for variational learning: C ICA models (ICA 1, ICA 2, …, ICA C) generating the observation $x$ with bias $x_0^c$
The source model is a MoG, i.e., a factorized mixture of one-dimensional Gaussians with $L_c$ factors (i.e., sources) and $m_i$ components per source. This model is defined as (the subscript $c$ has been dropped from the component parameters for brevity),
$$
p(\mathbf{s}_c \mid \theta_c, c)
= \prod_{i=1}^{L_c} \sum_{q_i=1}^{m_i} p(s_{c,i} \mid q_i, \theta_{c,i}, c)\, p(q_i \mid \pi_i, c)
= \prod_{i=1}^{L_c} \sum_{q_i=1}^{m_i} \pi_{i,q_i}\, \mathcal{N}\!\left(s_{c,i};\, \mu_{i,q_i},\, \beta_{i,q_i}\right)
\qquad (2.44)
$$
where $\mu_{i,q_i}$ is the position of feature $q_i$ w.r.t. the cluster centre, $\beta_{i,q_i}$ is its size, and $\pi_{i,q_i}$ its "prominence" w.r.t. the other features.
The mixture proportions $\pi_{i,q_i}$ are the prior probabilities of choosing component $q_i$ of the $i$th source (of the $c$th ICA model, etc.). $q_i$ is a variable indicating which component of the $i$th source is chosen for generating $s_{c,i}$, and takes on values in $\{q_i = 1, \ldots, q_i = m_i\}$ (where $m_i$ depends on the ICA model $c$). The parameters of source $i$ are $\theta_{c,i} = \{\pi_{c,i}, \mu_{c,i}, \beta_{c,i}\}$, with $\pi_{i,q_i} = p(q_i \mid \pi_i)$. The complete parameter set of the source model is $\theta_c = \{\theta_{c,1}, \theta_{c,2}, \ldots, \theta_{c,L_c}\}$.
The complete collection of possible source states is denoted as $\mathbf{q}_c = \{q_{c,1}, q_{c,2}, \ldots, q_{c,m}\}$ and runs over all $m = \prod_i m_i$ possible combinations of source states.
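The factorized MoG source density of Eq. (2.44) is straightforward to evaluate numerically. A minimal sketch (the parameter values are made up for illustration, and $\beta_{i,q_i}$ is assumed to denote a precision, i.e., an inverse variance):

```python
import numpy as np

def mog_source_density(s, pi, mu, beta):
    # p(s_c | theta_c, c) = prod_i sum_{q_i} pi_{i,q_i} N(s_{c,i}; mu_{i,q_i}, beta_{i,q_i})
    # beta is treated as a precision (an assumption, not stated in the text).
    p = 1.0
    for s_i, pi_i, mu_i, beta_i in zip(s, pi, mu, beta):
        comps = np.sqrt(beta_i / (2.0 * np.pi)) * np.exp(-0.5 * beta_i * (s_i - mu_i) ** 2)
        p *= np.sum(pi_i * comps)  # sum over the m_i components of source i
    return p

# Toy setup: L_c = 2 sources with m_1 = 2 and m_2 = 3 components.
pi_   = [np.array([0.5, 0.5]),  np.array([0.2, 0.5, 0.3])]
mu_   = [np.array([-1.0, 1.0]), np.array([-2.0, 0.0, 2.0])]
beta_ = [np.array([1.0, 1.0]),  np.array([4.0, 1.0, 4.0])]
print(mog_source_density(np.array([0.3, -0.1]), pi_, mu_, beta_))
```

Note that each source is allowed a different number of components $m_i$, which is why the parameters are kept as a list of per-source arrays rather than a single matrix.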
It can be shown that the likelihood of the i.i.d. data $X = \{x_1, x_2, \ldots, x_N\}$ given the model parameters $\Theta_c = \{A_c, x_{0,c}, \Lambda_c, \theta_c\}$ (mixing matrix, bias, noise precision, and source parameters of the $c$th ICA model) can be written as

$$
p(X \mid \Theta_c, c) = \prod_{n=1}^{N} \sum_{q=1}^{m} \int p(x_n, s_c, q_c \mid \Theta_c, c)\, ds_c
\qquad (2.45)
$$

where $ds_c = \prod_i ds_{c,i}$.
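In general, the integral over the sources in (2.45) is exactly what the variational machinery approximates. Under the simplifying (and here purely illustrative) assumption of a noiseless, square, invertible ICA model, $x_n = A_c s_n + x_{0,c}$, the integral collapses by a change of variables, $p(x_n \mid \Theta_c, c) = |\det A_c|^{-1}\, p(A_c^{-1}(x_n - x_{0,c}) \mid \theta_c, c)$, which gives a concrete, computable instance of the likelihood. A sketch under that assumption:

```python
import numpy as np

def gaussian(s, mu, beta):
    # 1-D Gaussian density with mean mu and precision beta (assumption).
    return np.sqrt(beta / (2.0 * np.pi)) * np.exp(-0.5 * beta * (s - mu) ** 2)

def noiseless_ica_loglik(X, A, x0, pi, mu, beta):
    """log p(X | Theta_c, c) for a noiseless, square ICA model
    x_n = A s_n + x0: each point contributes
    log p_s(A^{-1}(x_n - x0)) - log|det A|, with the MoG source
    density of Eq. (2.44)."""
    A_inv = np.linalg.inv(A)
    log_det = np.log(abs(np.linalg.det(A)))
    ll = 0.0
    for x in X:
        s = A_inv @ (x - x0)
        for s_i, pi_i, mu_i, beta_i in zip(s, pi, mu, beta):
            ll += np.log(np.sum(pi_i * gaussian(s_i, mu_i, beta_i)))
        ll -= log_det
    return ll

# Toy 1-D check: A = [[2]], one unit-precision component at 0,
# so p(x) = 0.5 * N(x/2; 0, 1).
ll = noiseless_ica_loglik(np.array([[1.0]]), np.array([[2.0]]), np.array([0.0]),
                          [np.array([1.0])], [np.array([0.0])], [np.array([1.0])])
print(ll)
```

The noisy case of (2.45) additionally integrates a Gaussian observation term against the source prior, which is no longer tractable in closed form; that is the role of the variational approximation.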
Thus the probability of generating a data vector from a $C$-component mixture model can be written as

$$
p(X \mid \mathcal{M}) = \sum_{c=1}^{C} p(c \mid \lambda)\, p(X \mid \Theta_c, c)
\qquad (2.46)
$$
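Once the per-component likelihoods $p(X \mid \Theta_c, c)$ are available (however they are approximated), Eq. (2.46) is just a prior-weighted sum over the $C$ components. A sketch that evaluates it stably in log space (the numeric values are hypothetical):

```python
import numpy as np

def mixture_log_likelihood(log_lik, log_prior):
    # log p(X|M) = log sum_c p(c|lambda) p(X|Theta_c, c), Eq. (2.46),
    # computed with the log-sum-exp trick to avoid underflow.
    a = log_lik + log_prior
    a_max = np.max(a)
    return a_max + np.log(np.sum(np.exp(a - a_max)))

# Hypothetical per-component log-likelihoods for C = 3 ICA models.
log_lik = np.array([-1204.3, -1187.9, -1251.0])  # log p(X | Theta_c, c)
log_prior = np.log(np.array([0.5, 0.3, 0.2]))    # log p(c | lambda)
print(mixture_log_likelihood(log_lik, log_prior))
```

The same two arrays also give the posterior responsibilities $p(c \mid X, \mathcal{M}) \propto p(c \mid \lambda)\, p(X \mid \Theta_c, c)$ used to assign observations to ICA components; the log-sum-exp trick matters here because per-dataset log-likelihoods of this magnitude underflow `np.exp` directly.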