Function Hessian(Φ, G, a_β, b_β)
Input: mixing feature matrix Φ, mixing matrix G, mixing weight prior parameters a_β, b_β
Output: (KD_V) × (KD_V) Hessian matrix H
 1  get D_V, K from shape of Φ, G
 2  H ← empty (KD_V) × (KD_V) matrix
 3  for k = 1 to K do
 4      g_k ← kth column of G
 5      for j = 1 to k − 1 do
 6          g_j ← jth column of G
 7          H_kj ← −Φ^T (Φ ⊙ (g_k ⊙ g_j))
 8          kjth D_V × D_V block of H ← H_kj
 9          jkth D_V × D_V block of H ← H_kj
10      a_βk, b_βk ← pick from a_β, b_β
11      H_kk ← Φ^T (Φ ⊙ (g_k ⊙ (1 − g_k))) + (a_βk / b_βk) I
12      kth D_V × D_V block along diagonal of H ← H_kk
13  return H
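The listing above can be sketched in NumPy as follows. This is an illustrative implementation, not the book's own code: ⊙ is taken to be the element-wise product (applied per row of Φ), and all names and shapes (Φ as N × D_V, G as N × K) are assumptions drawn from the listing.

```python
import numpy as np

def hessian(Phi, G, a_beta, b_beta):
    """Assemble the (K*D_V) x (K*D_V) Hessian block matrix.

    Phi: N x D_V mixing feature matrix; G: N x K mixing matrix;
    a_beta, b_beta: length-K prior parameter arrays. Illustrative sketch.
    """
    N, D_V = Phi.shape
    K = G.shape[1]
    H = np.zeros((K * D_V, K * D_V))
    for k in range(K):
        g_k = G[:, k]
        for j in range(k):
            g_j = G[:, j]
            # Off-diagonal block: H_kj = -Phi^T (Phi * (g_k * g_j)),
            # with * the element-wise product applied to each row of Phi.
            H_kj = -Phi.T @ (Phi * (g_k * g_j)[:, None])
            H[k*D_V:(k+1)*D_V, j*D_V:(j+1)*D_V] = H_kj
            # H_kj is symmetric, so the jk-th block equals the kj-th.
            H[j*D_V:(j+1)*D_V, k*D_V:(k+1)*D_V] = H_kj
        # Diagonal block: Phi^T (Phi * (g_k (1 - g_k))) + (a_k / b_k) I
        H_kk = (Phi.T @ (Phi * (g_k * (1.0 - g_k))[:, None])
                + (a_beta[k] / b_beta[k]) * np.eye(D_V))
        H[k*D_V:(k+1)*D_V, k*D_V:(k+1)*D_V] = H_kk
    return H
```

Because every block is itself symmetric and the kj-th and jk-th blocks coincide, the assembled H is symmetric, which the Newton iteration using it relies on.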
Function TrainMixPriors(V, Λ_V^{-1})
Input: mixing weight matrix V, mixing weight covariance matrix Λ_V^{-1}
Output: mixing weight vector prior parameters a_β, b_β
 1  get D_V, K from shape of V
 2  for k = 1 to K do
 3      v_k ← kth column of V
 4      (Λ_V^{-1})_kk ← kth D_V × D_V block along diagonal of Λ_V^{-1}
 5      a_βk ← a_β + D_V / 2
 6      b_βk ← b_β + (1/2) (Tr((Λ_V^{-1})_kk) + v_k^T v_k)
 7  a_β, b_β ← {a_β1, ..., a_βK}, {b_β1, ..., b_βK}
 8  return a_β, b_β
The posterior parameters of the prior on the mixing weights are evaluated according to (7.56), (7.57), and (7.70) in order to get q_β(β_k) for all k. Function TrainMixPriors takes the parameters of q_V(V) and returns the parameters of all q_β(β_k). The posterior parameters are computed by iterating over all k and, in Lines 5 and 6, performing a straightforward evaluation of (7.56) and (7.57), where in the latter (7.70) replaces E_V(v_k^T v_k).
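The two updates in Lines 5 and 6 can be sketched as follows. This is a minimal NumPy illustration, assuming V is stored as a D_V × K matrix, Λ_V^{-1} as a (KD_V) × (KD_V) matrix, and the hyperprior values a_β, b_β are passed in explicitly; none of these conventions come from the source.

```python
import numpy as np

def train_mix_priors(V, Lambda_V_inv, a_beta0, b_beta0):
    """Update the posterior parameters a_beta_k, b_beta_k for all k.

    V: D_V x K mixing weight matrix;
    Lambda_V_inv: (K*D_V) x (K*D_V) mixing weight covariance matrix;
    a_beta0, b_beta0: scalar hyperprior parameters. Illustrative sketch.
    """
    D_V, K = V.shape
    a_beta = np.empty(K)
    b_beta = np.empty(K)
    for k in range(K):
        v_k = V[:, k]
        # k-th D_V x D_V block along the diagonal of Lambda_V^{-1}
        block = Lambda_V_inv[k*D_V:(k+1)*D_V, k*D_V:(k+1)*D_V]
        # Line 5: a_beta_k = a_beta + D_V / 2
        a_beta[k] = a_beta0 + D_V / 2.0
        # Line 6: b_beta_k = b_beta + (Tr((Lambda_V^{-1})_kk) + v_k^T v_k) / 2
        b_beta[k] = b_beta0 + 0.5 * (np.trace(block) + v_k @ v_k)
    return a_beta, b_beta
```

Note that the trace-plus-inner-product term is exactly the replacement of E_V(v_k^T v_k) described in the text.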
8.1.4
The Variational Bound
The variational bound L(q) is evaluated in Function VarBound according to (7.96). The function takes the model structure, the data, and the trained classifier and mixing model parameters, and returns the value for L(q). The classifier-specific