where $I_\sigma$ represents the convolution of the image with a Gaussian kernel and $\nabla I_\sigma$ its gradient. The parameters $\lambda_i^\sigma$ represent the eigenvalues of the Hessian matrix of the image $I_\sigma$, ordered by increasing magnitude.
5.2.2.2 Training Set Normalization
As is classical in pattern recognition theory, the feature vectors of the training set are normalized. We applied the normalization

$$ f_n^m = \frac{f_n - \mu_n}{\sigma_n} \qquad (5.3) $$
where $f_n$, $\mu_n$ and $\sigma_n$ are the value of the $n$-th component of the $m$-th feature vector, and the mean and the standard deviation of the $n$-th component over the training set, respectively [23].
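As a minimal sketch of this per-component normalization (assuming NumPy; the function and variable names are illustrative, not from the chapter):

```python
import numpy as np

def normalize_training_set(features):
    """Per-component z-score normalization of a training set (Eq. 5.3).

    features: (M, N) array, one N-dimensional feature vector per row.
    Returns the normalized features together with the per-component mean
    and standard deviation, so that unseen feature vectors can later be
    normalized with the same statistics.
    """
    mu = features.mean(axis=0)     # mean of each component over the set
    sigma = features.std(axis=0)   # standard deviation of each component
    return (features - mu) / sigma, mu, sigma
```

Keeping `mu` and `sigma` is important in practice: the feature vector of each test voxel must be shifted and scaled with the training-set statistics, not its own.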
5.2.2.3 Probability Density Function Estimation
The kNN rule is used to estimate the underlying PDF as follows. For a given voxel $x$, the feature vector $f(x)$ is defined as in Eq. (5.2) and normalized as in Eq. (5.3). Then, the $k$ nearest feature vectors are found in the training set according to the Euclidean distance. The probability for a voxel of intensity $i$ to belong to a tissue class $C_j$ is computed from the formula
$$ P(I(x) = i \mid C_j) = \frac{\displaystyle\sum_{x' \in L_j \cap N_k(x)} d(f(x), f(x'))}{\displaystyle\sum_{x' \in N_k(x)} d(f(x), f(x'))} \qquad (5.4) $$
where $L_j$ represents the set of points in the training set that belong to the class $C_j$, $N_k(x)$ is the set of the $k$ nearest neighbors, and $d$ represents the Euclidean distance. Figure 5.6 shows an example of the probability density functions estimated by the kNN rule. In the sequel, $C_0$, $C_1$ and $C_2$ will stand for the vessel, background and bone classes, respectively.
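The estimate of Eq. (5.4) can be sketched for a single voxel as follows (a NumPy illustration with hypothetical names; it uses a brute-force neighbor search, whereas a real implementation would likely use a spatial index such as a k-d tree):

```python
import numpy as np

def knn_class_probability(f_x, train_features, train_labels, k, cls):
    """Estimate P(I(x) = i | C_j) for one voxel with the kNN rule (Eq. 5.4).

    f_x            : normalized feature vector f(x) of the voxel.
    train_features : (M, N) normalized training feature vectors.
    train_labels   : (M,) class label of each training vector (defines L_j).
    k              : number of nearest neighbors.
    cls            : class index j.
    """
    # Euclidean distance from f(x) to every training vector.
    dists = np.linalg.norm(train_features - f_x, axis=1)
    # Indices of the k nearest neighbors, i.e. N_k(x).
    nn = np.argsort(dists)[:k]
    total = dists[nn].sum()
    if total == 0.0:  # all k neighbors coincide with f(x); fall back to counts
        return float(np.mean(train_labels[nn] == cls))
    # Ratio of summed distances, restricted to neighbors labelled cls,
    # following Eq. (5.4) as printed.
    in_class = nn[train_labels[nn] == cls]
    return dists[in_class].sum() / total
```

Because the classes partition the neighbor set $N_k(x)$, the probabilities returned for all classes sum to one for any voxel.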
5.2.3 Maximum A Posteriori Tissue Classification
A MAP tissue classifier is used to obtain a partition of the image domain into regions corresponding to vessel, background and bone. The probabilities estimated
from the kNN rule provide a learned prior probability for a particular voxel to