Appendix B
Basics of Bayesian Inference
B.1 Linear Model and Bayesian Inference
This appendix explains the basics of Bayesian inference. Here, we consider the general problem of estimating the vector $\mathbf{x}$, containing $N$ unknowns, $\mathbf{x} = [x_1, \ldots, x_N]^T$, from the observation (the sensor data) $\mathbf{y}$, containing $M$ elements, $\mathbf{y} = [y_1, \ldots, y_M]^T$. We assume a linear model between the observation $\mathbf{y}$ and the unknown vector $\mathbf{x}$, such that

$$\mathbf{y} = \mathbf{H}\mathbf{x} + \boldsymbol{\varepsilon}, \tag{B.1}$$
where $\mathbf{H}$ is an $M \times N$ matrix that expresses the forward relationship between $\mathbf{y}$ and $\mathbf{x}$, and $\boldsymbol{\varepsilon}$ is additive noise superimposed on the observation $\mathbf{y}$.
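To make the model concrete, the following sketch simulates data from the linear model (B.1). The dimensions, the randomly drawn matrix $\mathbf{H}$, and the Gaussian noise level are illustrative assumptions of this sketch, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumed): M observations, N unknowns.
M, N = 20, 5

# H: M x N forward matrix relating the unknowns x to the observations y.
H = rng.standard_normal((M, N))

# True unknown vector x, drawn at random purely for illustration.
x_true = rng.standard_normal(N)

# Additive observation noise; Gaussian noise is an assumption of this sketch.
sigma = 0.1
eps = sigma * rng.standard_normal(M)

# The linear model of Eq. (B.1): y = H x + eps.
y = H @ x_true + eps
```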
To solve this estimation problem using Bayesian inference, we consider $\mathbf{x}$ to be a vector random variable and use the following three kinds of probability distributions.
(1) $p(\mathbf{x})$: The probability distribution of the unknown $\mathbf{x}$. This is called the prior probability distribution. It represents our prior knowledge of the unknown $\mathbf{x}$.
(2) $p(\mathbf{y}|\mathbf{x})$: The conditional probability of $\mathbf{y}$ given $\mathbf{x}$. This conditional probability is called the likelihood. The maximum likelihood method estimates the unknown $\mathbf{x}$ as the value of $\mathbf{x}$ that maximizes $p(\mathbf{y}|\mathbf{x})$.
(3) $p(\mathbf{x}|\mathbf{y})$: The probability of $\mathbf{x}$ given the observation $\mathbf{y}$. This is called the posterior probability. Bayesian inference estimates the unknown parameter $\mathbf{x}$ based on this posterior probability.

The posterior probability $p(\mathbf{x}|\mathbf{y})$ is obtained from the prior probability $p(\mathbf{x})$ and the likelihood $p(\mathbf{y}|\mathbf{x})$ using Bayes' rule,
$$p(\mathbf{x}|\mathbf{y}) = \frac{p(\mathbf{y}|\mathbf{x})\,p(\mathbf{x})}{\displaystyle\int p(\mathbf{y}|\mathbf{x})\,p(\mathbf{x})\,d\mathbf{x}}. \tag{B.2}$$
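As a minimal numerical illustration of Eq. (B.2), the sketch below evaluates the posterior of a single scalar unknown $x$ on a grid. The Gaussian prior, the Gaussian likelihood, and all numerical values are assumptions chosen for illustration; the integral in the denominator is approximated by a Riemann sum.

```python
import numpy as np

# Scalar toy problem: one unknown x, one observation y = x + noise.
# The Gaussian prior and likelihood below are illustrative assumptions.
y_obs = 1.2          # observed value (assumed)
sigma_noise = 0.5    # noise standard deviation (assumed)
sigma_prior = 1.0    # prior standard deviation (assumed)

# Grid over x for numerical integration.
x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]

# Prior p(x): zero-mean Gaussian (normalization constants cancel in B.2).
prior = np.exp(-x**2 / (2 * sigma_prior**2))

# Likelihood p(y|x): Gaussian centered at x.
likelihood = np.exp(-(y_obs - x)**2 / (2 * sigma_noise**2))

# Bayes' rule (B.2): numerator and normalizing denominator.
numerator = likelihood * prior
posterior = numerator / np.sum(numerator * dx)

# The posterior now integrates to (approximately) one.
print(np.sum(posterior * dx))   # ~1.0
print(x[np.argmax(posterior)])  # posterior mode (the MAP estimate)
```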
On the right-hand side of Eq. (B.2), the denominator serves only to enforce the normalization $\int p(\mathbf{x}|\mathbf{y})\,d\mathbf{x} = 1$, and it is often not needed to estimate the posterior $p(\mathbf{x}|\mathbf{y})$. Therefore, Bayes' rule can be expressed in the simpler form

$$p(\mathbf{x}|\mathbf{y}) \propto p(\mathbf{y}|\mathbf{x})\,p(\mathbf{x}).$$
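The proportional form is enough whenever only the maximizer of the posterior is needed. For the linear model (B.1), if one assumes a zero-mean Gaussian prior $\mathbf{x} \sim \mathcal{N}(\mathbf{0}, \alpha^{-1}\mathbf{I})$ and Gaussian noise $\boldsymbol{\varepsilon} \sim \mathcal{N}(\mathbf{0}, \sigma^2\mathbf{I})$ (both assumptions of this sketch, not choices made in the text), maximizing the unnormalized posterior $p(\mathbf{y}|\mathbf{x})\,p(\mathbf{x})$ has a closed form, and the normalizing denominator plays no role in the estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (assumed): linear model y = H x + eps, as in Eq. (B.1).
M, N = 20, 5
H = rng.standard_normal((M, N))
x_true = rng.standard_normal(N)
sigma = 0.1                      # noise standard deviation (assumed)
alpha = 1.0                      # prior precision (assumed)
y = H @ x_true + sigma * rng.standard_normal(M)

# With a Gaussian prior and likelihood, maximizing p(y|x) p(x) is equivalent
# to minimizing ||y - H x||^2 / sigma^2 + alpha ||x||^2, whose minimizer is
#   x_map = (H^T H + alpha sigma^2 I)^{-1} H^T y.
A = H.T @ H + alpha * sigma**2 * np.eye(N)
x_map = np.linalg.solve(A, H.T @ y)

print(np.linalg.norm(x_map - x_true))  # small when the noise level is low
```

This is the familiar ridge-regularized least-squares solution: under these assumptions the Gaussian prior acts as the regularizer.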