Substituting Eq. (2.58) into Eq. (2.57), we get the cost function

$$F(x) = \beta \, \| y - F x \|^2 + \alpha \, \| x \|^2 . \qquad (2.59)$$
The cost function in Eq. (2.59) is the same as the cost function in Eq. (2.37), assuming $\lambda = \alpha/\beta$. Thus, the solution obtained by minimizing this cost function is equal to the solution of the $L_2$-norm regularized minimum-norm method introduced in Sect. 2.8.
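As a quick numerical check, the sketch below (a minimal illustration with a randomly generated stand-in for the lead field $F$ and the sensor data $y$; all sizes and values are assumptions for demonstration only) verifies that the minimizer of Eq. (2.59) coincides with the $L_2$-regularized solution using $\lambda = \alpha/\beta$:

```python
import numpy as np

rng = np.random.default_rng(0)
M, D = 32, 90                      # M sensors, D = 3N source components (illustrative sizes)
F = rng.standard_normal((M, D))    # stand-in lead-field matrix
y = rng.standard_normal(M)         # stand-in sensor data at one time point
alpha, beta = 2.0, 50.0            # assumed prior and noise precisions

# Minimizer of Eq. (2.59): F(x) = beta*||y - F x||^2 + alpha*||x||^2
x_bayes = np.linalg.solve(beta * (F.T @ F) + alpha * np.eye(D), beta * (F.T @ y))

# L2-regularized minimum-norm solution with lambda = alpha/beta
lam = alpha / beta
x_reg = np.linalg.solve(F.T @ F + lam * np.eye(D), F.T @ y)

print(np.allclose(x_bayes, x_reg))  # True: the two solutions coincide
```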
To obtain the optimum estimate of $x$, we should compute the posterior distribution. In this case, the posterior is known to have a Gaussian distribution because $p(y|x)$ and $p(x)$ are both Gaussian, and the mean and the precision matrix of this posterior distribution are derived as in Eqs. (B.24) and (B.25). Substituting the prior precision $\Phi = \alpha I$ and the noise precision $\Lambda = \beta I$ into these equations, we have
$$\Gamma = \alpha I + \beta F^T F , \qquad (2.60)$$
$$\bar{x}(t) = \left( F^T F + \frac{\alpha}{\beta} I \right)^{-1} F^T y(t) . \qquad (2.61)$$
The Bayesian solution which minimizes the cost function in Eq. (2.59) is given in Eq. (2.61). This solution is the same as Eq. (2.39). Comparison between Eqs. (2.61) and (2.39) shows that the regularization constant is equal to $\alpha/\beta$, which is the inverse of the signal-to-noise ratio of the sensor data. This is in accordance with the arguments in Sect. 2.8 that when the sensor data contains larger amounts of noise, a larger regularization constant must be used.
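In code, Eqs. (2.60) and (2.61) can be computed as in the sketch below (a minimal illustration; the function name `posterior` and the array shapes are assumptions, not the book's notation):

```python
import numpy as np

def posterior(F, Y, alpha, beta):
    """Posterior precision (Eq. 2.60) and posterior mean (Eq. 2.61).

    F: (M, 3N) lead-field matrix; Y: (M, K) sensor data, one column per time point;
    alpha, beta: prior and noise precisions.
    """
    D = F.shape[1]
    Gamma = alpha * np.eye(D) + beta * (F.T @ F)     # Eq. (2.60)
    # Eq. (2.61), applied to all K time points at once
    X_bar = np.linalg.solve(F.T @ F + (alpha / beta) * np.eye(D), F.T @ Y)
    return Gamma, X_bar
```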
The optimum values of the hyperparameters $\alpha$ and $\beta$ can be obtained using the EM algorithm, as described in Sect. B.5.6. The update equations for the hyperparameters are:
$$\alpha^{-1} = \frac{1}{3N} \left[ \frac{1}{K} \sum_{k=1}^{K} \bar{x}^T(t_k) \, \bar{x}(t_k) + \operatorname{tr}\left( \Gamma^{-1} \right) \right] , \qquad (2.62)$$
K
ʓ 1
tr F T F
K
1
M
ʲ 1
2
=
1
y
(
t k )
F
x
¯
(
t k )
+
.
(2.63)
k =
Here, we assume that data from $K$ time points are available to determine $\alpha$ and $\beta$.
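A sketch of these updates (assuming the same array conventions as in the earlier sketch; $M$ is the number of sensors and $3N$ the source-space dimension):

```python
import numpy as np

def update_hyperparameters(F, Y, Gamma, X_bar):
    """One EM update of alpha and beta via Eqs. (2.62) and (2.63)."""
    M, D = F.shape                      # D = 3N
    K = Y.shape[1]
    Gamma_inv = np.linalg.inv(Gamma)
    # Eq. (2.62): 1/alpha = (1/3N)[ (1/K) sum_k ||x_bar(t_k)||^2 + tr(Gamma^{-1}) ]
    alpha_inv = (np.sum(X_bar**2) / K + np.trace(Gamma_inv)) / D
    # Eq. (2.63): 1/beta = (1/M)[ (1/K) sum_k ||y(t_k) - F x_bar(t_k)||^2 + tr(F^T F Gamma^{-1}) ]
    residual = Y - F @ X_bar
    beta_inv = (np.sum(residual**2) / K + np.trace(F.T @ F @ Gamma_inv)) / M
    return 1.0 / alpha_inv, 1.0 / beta_inv
```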
The Bayesian minimum-norm method is summarized as follows. First, $\Gamma$ and $\bar{x}(t_k)$ are computed using Eqs. (2.60) and (2.61) with initial values of $\alpha$ and $\beta$. Then, the values of $\alpha$ and $\beta$ are updated using Eqs. (2.62) and (2.63). Using the updated $\alpha$ and $\beta$, the values of $\Gamma$ and $\bar{x}(t_k)$ are updated using Eqs. (2.60) and (2.61). These procedures are repeated, and the resultant $\bar{x}(t_k)$ is the optimum estimate of $x(t_k)$.
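Putting the steps together, the whole procedure might look like the following sketch, reusing the hypothetical `posterior` and `update_hyperparameters` helpers from the sketches above:

```python
def bayesian_minimum_norm(F, Y, alpha0=1.0, beta0=1.0, n_iter=100):
    """EM iteration for the Bayesian minimum-norm method (illustrative sketch)."""
    alpha, beta = alpha0, beta0
    for _ in range(n_iter):
        Gamma, X_bar = posterior(F, Y, alpha, beta)               # Eqs. (2.60), (2.61)
        alpha, beta = update_hyperparameters(F, Y, Gamma, X_bar)  # Eqs. (2.62), (2.63)
    # Recompute the posterior mean with the final hyperparameters
    Gamma, X_bar = posterior(F, Y, alpha, beta)
    return X_bar, alpha, beta
```

In practice, the fixed iteration count above would be replaced by the marginal-likelihood stopping rule described next.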
The EM iteration may be stopped by monitoring the marginal likelihood, which is obtained using Eq. (B.29) as
$$\log p\big( y(t_1), \ldots, y(t_K) \,\big|\, \alpha, \beta \big) = -\frac{K}{2} \log \left| \Sigma_y \right| - \frac{1}{2} \sum_{k=1}^{K} y^T(t_k) \, \Sigma_y^{-1} \, y(t_k) , \qquad (2.64)$$

where $\Sigma_y = \beta^{-1} I + \alpha^{-1} F F^T$ is the marginal covariance of the sensor data under this model.
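A sketch of evaluating Eq. (2.64) for convergence monitoring ($\Sigma_y$ is formed explicitly here, which is adequate for a modest number of sensors $M$; the constant term in $2\pi$ is omitted, as in Eq. (2.64)):

```python
import numpy as np

def log_marginal_likelihood(F, Y, alpha, beta):
    """Eq. (2.64): log p(y(t_1), ..., y(t_K) | alpha, beta)."""
    M, K = Y.shape
    Sigma_y = np.eye(M) / beta + (F @ F.T) / alpha     # marginal covariance of y
    _, logdet = np.linalg.slogdet(Sigma_y)
    quad = np.sum(Y * np.linalg.solve(Sigma_y, Y))     # sum_k y(t_k)^T Sigma_y^{-1} y(t_k)
    return -0.5 * K * logdet - 0.5 * quad
```

The EM iteration can then be terminated once successive values of this quantity change by less than a chosen tolerance.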