On the other hand, according to Sect. 4.10.2, the remaining (data-dependent) term in Eq. (4.24) is expressed as

$$\frac{1}{2}\sum_{k=1}^{K} y_k^T \Sigma_y^{-1} y_k, \qquad (4.29)$$
where $\Sigma_y$ is given in Eq. (4.25). Thus, substituting Eqs. (4.28) and (4.29) into (4.24), we get
$$\log p(y \mid \alpha) = -\frac{K}{2}\log|\Sigma_y| - \frac{1}{2}\sum_{k=1}^{K} y_k^T \Sigma_y^{-1} y_k. \qquad (4.30)$$
The above equation indicates that $p(y \mid \alpha)$ is Gaussian with mean equal to zero and covariance matrix equal to $\Sigma_y$. This $\Sigma_y$ is called the model data covariance. Therefore, the estimate of $\alpha$, denoted $\hat{\alpha}$, is obtained by maximizing $\log p(y \mid \alpha)$ expressed above. Alternatively, defining the cost function such that
$$F(\alpha) = \log|\Sigma_y| + \frac{1}{K}\sum_{k=1}^{K} y_k^T \Sigma_y^{-1} y_k, \qquad (4.31)$$
the estimate $\hat{\alpha}$ is obtained by minimizing this cost function.
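As a rough illustration of Eq. (4.31), the cost can be evaluated numerically once $\Sigma_y$ is available. The Python sketch below is not from the text: the function name marginal_cost, the convention of stacking the $K$ snapshots $y_k$ as the columns of an $M \times K$ array, and the Cholesky-based evaluation of $\log|\Sigma_y|$ are illustrative assumptions; how $\Sigma_y$ is built from $\alpha$ via Eq. (4.25) is not shown here.

```python
import numpy as np

def marginal_cost(Y, Sigma_y):
    """Evaluate the cost F of Eq. (4.31).

    Y       : (M, K) array, the K sensor snapshots y_k stacked as columns.
    Sigma_y : (M, M) model data covariance for the current hyperparameters.
    """
    M, K = Y.shape
    # log|Sigma_y| via a Cholesky factor (Sigma_y is symmetric positive definite)
    L = np.linalg.cholesky(Sigma_y)
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    # (1/K) * sum_k y_k^T Sigma_y^{-1} y_k, without forming the explicit inverse
    solved = np.linalg.solve(Sigma_y, Y)      # Sigma_y^{-1} Y
    quad = np.sum(Y * solved) / K
    return logdet + quad
```

Solving the linear system $\Sigma_y x = y_k$ rather than forming $\Sigma_y^{-1}$ explicitly is the usual numerically safer choice for the quadratic term.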
4.4 Update Equations for $\alpha$
In this section, we derive the update equation for $\alpha$. As will be shown, the update equation contains the parameters of the posterior distribution. Since the value of $\alpha$ is needed to compute the posterior distribution, the algorithm for computing $\alpha$ is a recursive algorithm, as is the case with the EM algorithm presented in Sect. B.5 in the Appendix. That is, an initial value is first set for $\alpha$, and the posterior distribution is computed. Then, using the parameters of the posterior distribution, $\alpha$ is updated. These procedures are repeated until a certain stopping condition is met.
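To make the iteration concrete, here is a minimal Python skeleton of the recursive scheme just described. It is only a sketch: the callables compute_posterior and update_alpha stand in for the posterior computation and the update equation derived below, which are not reproduced here, and the stopping rule (relative change in $\alpha$) is one arbitrary choice of stopping condition.

```python
import numpy as np

def estimate_alpha(Y, compute_posterior, update_alpha, alpha0,
                   tol=1e-6, max_iter=200):
    """Skeleton of the recursive hyperparameter estimation loop.

    compute_posterior(alpha, Y) -> posterior parameters (e.g. posterior
        mean and covariance of the sources), computed with the current alpha.
    update_alpha(posterior, Y, alpha) -> new alpha, the update equation
        derived in this section.
    alpha0 : initial value for alpha.
    """
    alpha = np.asarray(alpha0, dtype=float)
    for _ in range(max_iter):
        posterior = compute_posterior(alpha, Y)        # step 1: posterior with current alpha
        alpha_new = update_alpha(posterior, Y, alpha)  # step 2: update alpha from the posterior
        # stop when alpha has (relatively) converged
        if np.linalg.norm(alpha_new - alpha) < tol * (np.linalg.norm(alpha) + 1e-12):
            alpha = alpha_new
            break
        alpha = alpha_new
    return alpha
```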
Let us derive the update equation for $\alpha$ by minimizing the cost function $F(\alpha)$, i.e.,

$$\hat{\alpha} = \operatorname*{arg\,min}_{\alpha} F(\alpha).$$
The derivative of $F(\alpha)$ with respect to $\alpha_j$ is computed as

$$\frac{\partial F(\alpha)}{\partial \alpha_j} = \frac{\partial}{\partial \alpha_j}\log|\Sigma_y| + \frac{1}{K}\frac{\partial}{\partial \alpha_j}\sum_{k=1}^{K} y_k^T \Sigma_y^{-1} y_k. \qquad (4.32)$$
The first term on the right-hand side is expressed using Eq. (4.28) as

$$\frac{\partial}{\partial \alpha_j}\log|\Sigma_y| = \frac{\partial}{\partial \alpha_j}\bigl(-M\log\beta - \log|\Phi| + \log|\Gamma|\bigr) = -\frac{\partial}{\partial \alpha_j}\log|\Phi| + \frac{\partial}{\partial \alpha_j}\log|\Gamma|, \qquad (4.33)$$

where the term $-M\log\beta$ drops out because it does not depend on $\alpha_j$.
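As a quick sanity check on the determinant relation of Eq. (4.28) used above, the following snippet verifies it numerically. It assumes the standard definitions in this framework, $\Sigma_y = \beta^{-1} I + H \Phi^{-1} H^T$ and $\Gamma = \Phi + \beta H^T H$, with $H$ the $M \times N$ lead-field matrix; these definitions come from the surrounding chapter rather than this excerpt, so treat them as assumptions here.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 5, 8                                   # sensors, sources (toy sizes)
beta = 2.5                                    # noise precision (assumed scalar)
H = rng.standard_normal((M, N))               # lead-field matrix (assumption)
Phi = np.diag(rng.uniform(0.5, 2.0, size=N))  # prior precision matrix (assumption)

# Assumed definitions: model data covariance and posterior precision
Sigma_y = np.eye(M) / beta + H @ np.linalg.inv(Phi) @ H.T
Gamma = Phi + beta * H.T @ H

lhs = np.linalg.slogdet(Sigma_y)[1]
rhs = -M * np.log(beta) - np.linalg.slogdet(Phi)[1] + np.linalg.slogdet(Gamma)[1]
print(np.allclose(lhs, rhs))                  # expected: True
```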
 