Biomedical Engineering Reference
In-Depth Information
2.10 Bayesian Derivation of the Minimum-Norm Method
2.10.1 Prior Probability Distribution and Cost Function
In this section, we derive the minimum-norm method based on Bayesian inference. As in Eq. (2.16), we assume that the noise $\boldsymbol{\varepsilon}$ is independently and identically distributed Gaussian, i.e.,

\[
\boldsymbol{\varepsilon} \sim \mathcal{N}(\boldsymbol{\varepsilon} \mid \boldsymbol{0},\ \beta^{-1}\boldsymbol{I}), \tag{2.54}
\]

where the precision $\beta$ is used, which is the inverse of the noise variance, $\beta^{-1} = \sigma^{2}$. Thus, using Eq. (2.14), the conditional probability distribution of the sensor data for a given $\boldsymbol{x}$, $p(\boldsymbol{y} \mid \boldsymbol{x})$, is

\[
p(\boldsymbol{y} \mid \boldsymbol{x}) = \left(\frac{\beta}{2\pi}\right)^{M/2} \exp\left(-\frac{\beta}{2} \left\| \boldsymbol{y} - \boldsymbol{F}\boldsymbol{x} \right\|^{2}\right). \tag{2.55}
\]
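As an illustration, the Gaussian log-likelihood implied by Eq. (2.55) can be evaluated numerically. The sketch below uses NumPy; the forward matrix $\boldsymbol{F}$, the source vector, the precision $\beta$, and all sizes are hypothetical values chosen only for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

M, N = 5, 8                        # numbers of sensors and sources (illustrative)
F = rng.standard_normal((M, N))    # hypothetical forward (lead-field) matrix
x = rng.standard_normal(N)         # hypothetical source vector
beta = 4.0                         # noise precision, beta = 1 / sigma^2

# Sensor data generated according to the noise model of Eq. (2.54)
y = F @ x + rng.standard_normal(M) / np.sqrt(beta)

def log_likelihood(y, F, x, beta):
    """Logarithm of Eq. (2.55):
    log p(y | x) = (M/2) log(beta / (2 pi)) - (beta / 2) ||y - F x||^2."""
    resid = y - F @ x
    return 0.5 * y.size * np.log(beta / (2 * np.pi)) - 0.5 * beta * resid @ resid
```

The log-likelihood is largest when the residual $\boldsymbol{y} - \boldsymbol{F}\boldsymbol{x}$ vanishes, which is why its negative reappears as the squared-error term of the cost function derived below.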
This conditional probability $p(\boldsymbol{y} \mid \boldsymbol{x})$ is equal to the likelihood $p(\boldsymbol{y})$ in the arguments in Sect. 2.5. Since $\boldsymbol{x}$ is a random variable in the Bayesian arguments, we use the conditional probability $p(\boldsymbol{y} \mid \boldsymbol{x})$, instead of $p(\boldsymbol{y})$.

Let us derive a cost function for estimating $\boldsymbol{x}$. Taking the logarithm of Bayes' rule in Eq. (B.3) in the Appendix, we have

\[
\log p(\boldsymbol{x} \mid \boldsymbol{y}) = \log p(\boldsymbol{y} \mid \boldsymbol{x}) + \log p(\boldsymbol{x}) + C, \tag{2.56}
\]
where $C$ represents the constant terms. Neglecting $C$, the cost function $\mathcal{F}(\boldsymbol{x})$ in general form is obtained as

\[
\mathcal{F}(\boldsymbol{x}) = -2 \log p(\boldsymbol{x} \mid \boldsymbol{y}) = \beta \left\| \boldsymbol{y} - \boldsymbol{F}\boldsymbol{x} \right\|^{2} - 2 \log p(\boldsymbol{x}). \tag{2.57}
\]
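The step from Eq. (2.56) to Eq. (2.57) can be made explicit: multiply Eq. (2.56) by $-2$ and substitute the logarithm of the likelihood in Eq. (2.55),

```latex
\begin{align*}
-2 \log p(\boldsymbol{x} \mid \boldsymbol{y})
  &= -2 \log p(\boldsymbol{y} \mid \boldsymbol{x}) - 2 \log p(\boldsymbol{x}) - 2C \\
  &= \beta \left\| \boldsymbol{y} - \boldsymbol{F}\boldsymbol{x} \right\|^{2}
     - M \log\frac{\beta}{2\pi}
     - 2 \log p(\boldsymbol{x}) - 2C.
\end{align*}
```

The terms $-M \log(\beta/2\pi)$ and $-2C$ do not depend on $\boldsymbol{x}$, so they can be absorbed into the neglected constant, leaving Eq. (2.57).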
The first term on the right-hand side is a squared error term, which expresses how well the solution $\boldsymbol{x}$ fits the sensor data $\boldsymbol{y}$. The second term $-2 \log p(\boldsymbol{x})$ is a constraint imposed on the solution. The above equation indicates that the constraint term in the cost function is given by the prior probability distribution in the Bayesian formulation. The optimum estimate of $\boldsymbol{x}$ is obtained by minimizing the cost function $\mathcal{F}(\boldsymbol{x})$.
2.10.2 L2-Regularized Method
Let us assume the following Gaussian distribution for the prior probability distribution of $\boldsymbol{x}$:

\[
p(\boldsymbol{x}) = \left(\frac{\alpha}{2\pi}\right)^{N/2} \exp\left(-\frac{\alpha}{2} \left\| \boldsymbol{x} \right\|^{2}\right), \tag{2.58}
\]

where $\alpha$ is the precision of the prior distribution.
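Substituting this Gaussian prior into the general cost function of Eq. (2.57) gives, up to constants, the L2-regularized cost $\beta \|\boldsymbol{y} - \boldsymbol{F}\boldsymbol{x}\|^{2} + \alpha \|\boldsymbol{x}\|^{2}$, whose minimizer is the minimum-norm solution. A minimal NumPy sketch, with hypothetical sizes and precision values chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

M, N = 4, 10                       # underdetermined: fewer sensors than sources
F = rng.standard_normal((M, N))    # hypothetical forward matrix
x_true = rng.standard_normal(N)
beta, alpha = 100.0, 1.0           # noise precision and prior precision (illustrative)
y = F @ x_true + rng.standard_normal(M) / np.sqrt(beta)

def cost(x):
    """Eq. (2.57) with the Gaussian prior of Eq. (2.58), constants dropped:
    F(x) = beta ||y - F x||^2 + alpha ||x||^2."""
    r = y - F @ x
    return beta * r @ r + alpha * x @ x

# Setting the gradient of the cost to zero gives the closed-form minimizer
#   x_hat = F^T (F F^T + (alpha/beta) I)^{-1} y,
# i.e. the L2-regularized minimum-norm solution.
lam = alpha / beta
x_hat = F.T @ np.linalg.solve(F @ F.T + lam * np.eye(M), y)
```

The matrix identity $(\boldsymbol{F}^{T}\boldsymbol{F} + \lambda\boldsymbol{I})^{-1}\boldsymbol{F}^{T} = \boldsymbol{F}^{T}(\boldsymbol{F}\boldsymbol{F}^{T} + \lambda\boldsymbol{I})^{-1}$ with $\lambda = \alpha/\beta$ lets the linear solve be carried out in the smaller $M \times M$ sensor space.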
 
 