Namely, $\mathbf{x}_2$ obeys the Gaussian distribution with a mean equal to $\mathbf{A}\boldsymbol{\mu} + \mathbf{c}$ and a covariance matrix equal to $\mathbf{A}\boldsymbol{\Sigma}\mathbf{A}^T$.

Bayesian inference quite often uses the precision, instead of the variance. Let us define the precision matrix $\boldsymbol{\Lambda}$ that corresponds to the covariance matrix $\boldsymbol{\Sigma}$, i.e., $\boldsymbol{\Lambda} = \boldsymbol{\Sigma}^{-1}$. When the precision $\boldsymbol{\Lambda}$ is used, the notation in Eq. (C.2) is written as

$$
\mathbf{x} \sim \mathcal{N}(\mathbf{x} \mid \boldsymbol{\mu}, \boldsymbol{\Lambda}^{-1}).
\tag{C.4}
$$
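A minimal NumPy sketch can check both properties numerically: that the affine map $\mathbf{x}_2 = \mathbf{A}\mathbf{x} + \mathbf{c}$ yields mean $\mathbf{A}\boldsymbol{\mu} + \mathbf{c}$ and covariance $\mathbf{A}\boldsymbol{\Sigma}\mathbf{A}^T$, and that $\boldsymbol{\Lambda}\boldsymbol{\Sigma} = \mathbf{I}$. The values of $\mathbf{A}$, $\boldsymbol{\mu}$, $\boldsymbol{\Sigma}$, and $\mathbf{c}$ below are illustrative assumptions, not taken from the text.

```python
# Sketch: sample x ~ N(mu, Sigma), apply x2 = A x + c, and compare the
# empirical mean/covariance of x2 with A mu + c and A Sigma A^T.
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0])                  # assumed test mean
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])              # assumed test covariance
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])                  # assumed affine map
c = np.array([0.5, -1.0])

x = rng.multivariate_normal(mu, Sigma, size=200_000)
x2 = x @ A.T + c                            # x2 = A x + c, per sample

print(np.allclose(x2.mean(axis=0), A @ mu + c, atol=0.02))    # mean:  A mu + c
print(np.allclose(np.cov(x2.T), A @ Sigma @ A.T, atol=0.05))  # cov:   A Sigma A^T

Lambda = np.linalg.inv(Sigma)               # precision Lambda = Sigma^{-1}
print(np.allclose(Lambda @ Sigma, np.eye(2)))
```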
Here, since we maintain the notational convention $\mathcal{N}(\text{random variable} \mid \text{mean}, \text{covariance matrix})$, we must use $\boldsymbol{\Lambda}^{-1}$ in this notation. Using the precision matrix $\boldsymbol{\Lambda}$, the explicit form of the Gaussian distribution is given by
$$
p(\mathbf{x}) = \frac{|\boldsymbol{\Lambda}|^{1/2}}{(2\pi)^{N/2}} \exp\left[ -\frac{1}{2} (\mathbf{x} - \boldsymbol{\mu})^T \boldsymbol{\Lambda} (\mathbf{x} - \boldsymbol{\mu}) \right].
\tag{C.5}
$$
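A minimal sketch, assuming SciPy is available, evaluates Eq. (C.5) directly in its precision form and compares the result with scipy.stats.multivariate_normal; the test point and parameter values are illustrative assumptions.

```python
# Sketch: evaluate Eq. (C.5) with the precision matrix Lambda and cross-check
# against SciPy's covariance-parameterized density.
import numpy as np
from scipy.stats import multivariate_normal

mu = np.array([1.0, -2.0])                  # assumed test mean
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])              # assumed test covariance
Lambda = np.linalg.inv(Sigma)               # precision matrix
N = len(mu)

def gauss_pdf_precision(x, mu, Lambda):
    """p(x) = |Lambda|^{1/2} / (2 pi)^{N/2} * exp(-(x-mu)^T Lambda (x-mu) / 2)."""
    d = x - mu
    norm = np.sqrt(np.linalg.det(Lambda)) / (2 * np.pi) ** (N / 2)
    return norm * np.exp(-0.5 * d @ Lambda @ d)

x = np.array([0.3, -1.2])                   # assumed test point
print(np.isclose(gauss_pdf_precision(x, mu, Lambda),
                 multivariate_normal(mu, Sigma).pdf(x)))
```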
Let us compute the entropy when the probability distribution is Gaussian. The
definition of the entropy for a continuous random variable is
$$
H[p(\mathbf{x})] = -\int p(\mathbf{x}) \log p(\mathbf{x}) \, d\mathbf{x} = -E\left[ \log p(\mathbf{x}) \right].
\tag{C.6}
$$
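Eq. (C.6) suggests a simple Monte Carlo check: draw samples from $p$ and average $-\log p(\mathbf{x})$ over them. A sketch, assuming SciPy and illustrative parameter values:

```python
# Sketch: estimate H = -E[log p(x)] by averaging -log p over samples from p,
# then compare with SciPy's closed-form Gaussian entropy.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0])                  # assumed test mean
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])              # assumed test covariance
p = multivariate_normal(mu, Sigma)

x = p.rvs(size=100_000, random_state=rng)
H_mc = -np.mean(p.logpdf(x))                # Monte Carlo estimate of -E[log p(x)]
print(H_mc, p.entropy())                    # the two values should nearly agree
```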
Substituting the probability distribution in Eq. (C.1) into Eq. (C.6), we get
$$
H[p(\mathbf{x})] = -E\left[ \log \frac{1}{(2\pi)^{N/2} |\boldsymbol{\Sigma}|^{1/2}} \right] + \frac{1}{2} E\left[ (\mathbf{x} - \boldsymbol{\mu})^T \boldsymbol{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu}) \right],
\tag{C.7}
$$
where constant terms are omitted. In the equation above, the first term on the right-hand side is equal to $\frac{1}{2} \log |\boldsymbol{\Sigma}|$. The second term is equal to
$$
\begin{aligned}
\frac{1}{2} E\left[ (\mathbf{x} - \boldsymbol{\mu})^T \boldsymbol{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu}) \right]
&= \frac{1}{2} E\left[ \operatorname{tr}\left( \boldsymbol{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu})(\mathbf{x} - \boldsymbol{\mu})^T \right) \right] \\
&= \frac{1}{2} \operatorname{tr}\left( \boldsymbol{\Sigma}^{-1} E\left[ (\mathbf{x} - \boldsymbol{\mu})(\mathbf{x} - \boldsymbol{\mu})^T \right] \right)
= \frac{1}{2} \operatorname{tr}\left( \boldsymbol{\Sigma}^{-1} \boldsymbol{\Sigma} \right) = \frac{N}{2}.
\end{aligned}
\tag{C.8}
$$
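The chain of identities in Eq. (C.8) can also be verified numerically; a sketch with assumed parameter values estimates the quadratic-form expectation by sampling and compares it with $\frac{1}{2}\operatorname{tr}(\boldsymbol{\Sigma}^{-1}\boldsymbol{\Sigma}) = N/2$:

```python
# Sketch: check that (1/2) E[(x-mu)^T Sigma^{-1} (x-mu)] reduces to N/2.
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0])                  # assumed test mean
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])              # assumed test covariance
Sigma_inv = np.linalg.inv(Sigma)
N = len(mu)

x = rng.multivariate_normal(mu, Sigma, size=200_000)
d = x - mu
quad = 0.5 * np.einsum("ni,ij,nj->n", d, Sigma_inv, d)  # per-sample quadratic form / 2
print(quad.mean())                                      # ~ N/2 (Monte Carlo)
print(0.5 * np.trace(Sigma_inv @ Sigma))                # exactly N/2
```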
Therefore, omitting constant terms, the entropy is expressed as
$$
H[p(\mathbf{x})] = H(\mathbf{x}) = \frac{1}{2} \log |\boldsymbol{\Sigma}|.
\tag{C.9}
$$
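Since Eq. (C.9) omits the constant $(N/2)(1 + \log 2\pi)$, adding it back recovers the full differential entropy $\frac{1}{2}\log\left[(2\pi e)^N |\boldsymbol{\Sigma}|\right]$, which is what SciPy reports. A sketch, assuming SciPy and an illustrative $\boldsymbol{\Sigma}$:

```python
# Sketch: Eq. (C.9) plus the omitted constant should match SciPy's entropy.
import numpy as np
from scipy.stats import multivariate_normal

Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])              # assumed test covariance
N = Sigma.shape[0]

H_sigma_part = 0.5 * np.log(np.linalg.det(Sigma))         # Eq. (C.9)
H_full = H_sigma_part + 0.5 * N * (1 + np.log(2 * np.pi)) # add back constants
print(np.isclose(H_full, multivariate_normal(np.zeros(N), Sigma).entropy()))
```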
 