Biomedical Engineering Reference
intensity values in any region are independent and sampled from a single distribution. Consequently, each region can be represented by that distribution.
2. For brain MR images, we assume that each global region follows a Gaussian
distribution. This hypothesis has been used extensively in brain tissue
segmentation [25, 26]. A further reason for adopting it is computational
efficiency: active contour evolution can then be driven solely by the
region mean and variance, which is efficient and very easy to implement.
3. The local region statistics are most similar to those of the global region to
which the voxel belongs. This assumption is intuitive; however, it degenerates
in the vicinity of the object boundary, where voxels are sampled from
two different Gaussian distributions. The region information is then
unreliable, but the complementary edge-based information reduces
segmentation errors in this situation.
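Assumption 2 implies that, during contour evolution, each global region only needs its sample mean and variance. A minimal sketch of the maximum-likelihood estimates (the function name is illustrative, not from the chapter):

```python
def region_gaussian(intensities):
    """Maximum-likelihood Gaussian parameters (mean, variance) of a
    region, estimated from its voxel intensities."""
    n = len(intensities)
    mean = sum(intensities) / n
    var = sum((v - mean) ** 2 for v in intensities) / n
    return mean, var
```

In practice these statistics would be updated incrementally as voxels enter or leave the region, rather than recomputed from scratch at every iteration.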
With these hypotheses, each region corresponds to one Gaussian distribution.
In other words, for a voxel we replace its intensity value with the Gaussian
distribution of its local region; a global region is likewise represented by a
Gaussian distribution. To decide which region a voxel should belong to, we then
need to measure the difference between two probability distributions. Information
divergence measures from information theory have been widely used in image
segmentation [18, 19, 10]. In this chapter we choose the symmetric Kullback-Leibler
divergence, also known as the J-divergence:
$$
D\bigl(p_1(x)\,\|\,p_2(x)\bigr)
= \frac{1}{2}\int \left[\, p_1(x)\log\frac{p_1(x)}{p_2(x)}
+ p_2(x)\log\frac{p_2(x)}{p_1(x)} \,\right] dx .
\tag{1}
$$
The J-divergence is convex, and $D(p_1 \,\|\, p_2) \ge 0$, with $D(p_1 \,\|\, p_2) = 0$ if and only
if $p_1$ and $p_2$ are equal everywhere. For Gaussian probability density
functions $p_1(x)$ and $p_2(x)$, the J-divergence reduces to a simple closed form when
the logarithm base is the irrational number $e$:
$$
\begin{aligned}
D\bigl(p_1(x)\,\|\,p_2(x)\bigr)
&= \frac{1}{2}\int p_1(x)\ln\frac{p_1(x)}{p_2(x)}\,dx
 + \frac{1}{2}\int p_2(x)\ln\frac{p_2(x)}{p_1(x)}\,dx \\
&= \frac{1}{2}\int p_1(x)\left[\ln\frac{\sigma_2}{\sigma_1}
 + \frac{(x-\mu_2)^2}{2\sigma_2^2}
 - \frac{(x-\mu_1)^2}{2\sigma_1^2}\right] dx \\
&\quad + \frac{1}{2}\int p_2(x)\left[\ln\frac{\sigma_1}{\sigma_2}
 + \frac{(x-\mu_1)^2}{2\sigma_1^2}
 - \frac{(x-\mu_2)^2}{2\sigma_2^2}\right] dx \\
&= \frac{\sigma_2^2 + (\mu_2-\mu_1)^2}{4\sigma_1^2}
 + \frac{\sigma_1^2 + (\mu_1-\mu_2)^2}{4\sigma_2^2}
 - \frac{1}{2} .
\end{aligned}
\tag{2}
$$
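The closed form in Eq. (2) is cheap to evaluate: it needs only the means and variances of the two regions. A minimal sketch, assuming $\sigma_i^2$ denotes the variance of region $i$ (the function name is illustrative, not from the chapter):

```python
def j_divergence(mu1, var1, mu2, var2):
    """Symmetric Kullback-Leibler (J-) divergence between two 1-D
    Gaussians N(mu1, var1) and N(mu2, var2), via the closed form of
    Eq. (2); var1 and var2 are variances, assumed > 0."""
    return ((var2 + (mu2 - mu1) ** 2) / (4.0 * var1)
            + (var1 + (mu1 - mu2) ** 2) / (4.0 * var2)
            - 0.5)
```

For identical distributions the divergence is zero; it grows with both the separation of the means and the mismatch of the variances, which is what lets it score how well a voxel's local distribution matches each global region.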