The infinite-dimensional Bayesian inference problem in Eq. (2) is therefore reduced to a finite-dimensional one where the conditional probability,

P(α, h, θ | I) ∝ P(I | α, h, θ) P(α, h, θ),   (4)

is optimized with respect to the shape parameters α and the transformation parameters h and θ. In the following, we will assume a uniform prior on the transformation parameters, i.e., P(α, h, θ) = P(α). In the next section we will discuss three solutions to model this shape prior.
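With a uniform prior on (h, θ), maximizing the posterior in Eq. (4) is equivalent to minimizing its negative logarithm, a data term plus a shape-prior term. A minimal sketch of that decomposition, assuming NumPy and using hypothetical callables `image_term` (for −log P(I | α, h, θ)) and `shape_prior` (for P(α)) that are not part of the chapter:

```python
import numpy as np

def neg_log_posterior(alpha, h, theta, image_term, shape_prior):
    """Negative log of Eq. (4) under a uniform prior on (h, theta):
    -log P(I | alpha, h, theta) - log P(alpha).
    `image_term` and `shape_prior` are placeholder callables."""
    return image_term(alpha, h, theta) - np.log(shape_prior(alpha))
```

In practice this energy would be minimized by gradient descent in α, h, and θ; the sketch only shows how the two log-terms combine.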
3. EFFICIENT NONPARAMETRIC STATISTICAL SHAPE MODEL
Given a set of aligned training shapes {φᵢ}, i = 1…N, we can represent each of them by its corresponding shape vector {αᵢ}, i = 1…N. In this notation, the goal of statistical shape learning is to infer a statistical distribution P(α) from these sample shapes. Two solutions that have been proposed are based on the assumptions that the training shapes can be approximated by a uniform distribution [7, 9]: P(α) = const., or by a Gaussian distribution [6]:

P(α) ∝ exp(−αᵀ Σ⁻¹ α),   where Σ = (1/N) ∑ᵢ αᵢ αᵢᵀ.   (5)
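The Gaussian alternative in Eq. (5) is straightforward to evaluate numerically. A minimal sketch, assuming NumPy, zero-mean shape vectors (as PCA coefficients typically are), and a small regularizer `eps` added for invertibility (the regularizer is our addition, not part of Eq. (5)):

```python
import numpy as np

def gaussian_shape_prior(alpha, alphas, eps=1e-8):
    """Unnormalized Gaussian prior of Eq. (5): exp(-alpha^T Sigma^{-1} alpha),
    where Sigma is the sample covariance (1/N) sum_i alpha_i alpha_i^T
    of the training shape vectors `alphas` (shape N x n)."""
    N, n = alphas.shape
    Sigma = alphas.T @ alphas / N + eps * np.eye(n)  # regularized covariance
    return np.exp(-alpha @ np.linalg.solve(Sigma, alpha))
```

For example, with two 1D training vectors ±1 the covariance is 1, so the prior evaluates to 1 at α = 0 and exp(−1) at α = 1.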
In the present chapter we propose to make use of nonparametric density estimation [12] to approximate the shape distribution within the linear subspace. We model the shape distribution by the kernel density estimate:

P(α) = (1/(N σⁿ)) ∑ᵢ₌₁ᴺ K((α − αᵢ)/σ),   where K(u) = (2π)^(−n/2) exp(−|u|²/2).   (6)
There exist various methods to automatically estimate appropriate values for the width σ of the kernel function, ranging from kth-nearest-neighbor estimates to cross-validation and bootstrapping. In this work, we simply set σ to be the average nearest-neighbor distance: σ² = (1/N) ∑ᵢ₌₁ᴺ min_{j≠i} |αᵢ − αⱼ|².
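The estimator in Eq. (6) with this bandwidth choice can be sketched directly. A minimal NumPy version (function names are ours; the chapter specifies only the formulas):

```python
import numpy as np

def nn_bandwidth(alphas):
    """Average nearest-neighbor bandwidth: sigma^2 is the mean, over all
    training vectors, of the squared distance to the nearest other vector."""
    d2 = ((alphas[:, None, :] - alphas[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # exclude j == i
    return np.sqrt(d2.min(axis=1).mean())

def kde_shape_prior(alpha, alphas, sigma):
    """Kernel density estimate of Eq. (6) with the isotropic Gaussian
    kernel K(u) = (2*pi)^(-n/2) exp(-|u|^2 / 2)."""
    N, n = alphas.shape
    u2 = ((alpha - alphas) ** 2).sum(-1) / sigma**2   # |u|^2 per sample
    k = np.exp(-0.5 * u2) / (2 * np.pi) ** (n / 2)
    return k.sum() / (N * sigma**n)
```

As a sanity check, a single 1D training shape at the origin with σ = 1 reduces Eq. (6) to the standard normal density, 1/√(2π) at α = 0.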
In the context of level set-based image segmentation, the kernel density estimator (6) has two advantages over the uniform and Gaussian distributions:

- The assumptions of a uniform or Gaussian distribution are generally not fulfilled. In Figure 3, we demonstrate this for a set of silhouettes of sample shapes. The kernel density estimator, on the other hand, is known to approximate arbitrary distributions. Under mild assumptions, it was shown to converge to the true distribution in the limit of infinite sample size [13].