In this latter expression, the product corresponds to a Gaussian distribution with mean $\big(\sum_{k=1}^{3} q^{(r)}_{T_i}(e_k)\,\lambda_{ik}\,\mu_{ik}\big) \,/\, \big(\sum_{k=1}^{3} q^{(r)}_{T_i}(e_k)\,\lambda_{ik}\big)$ and precision $\sum_{k=1}^{3} q^{(r)}_{T_i}(e_k)\,\lambda_{ik}$.

It follows that step E-S can be seen as the E-step for a standard Hidden MRF with class distributions defined by $g_S$ and an external field incorporating prior structure knowledge through $f$. As already mentioned, it can be solved using techniques such as those described in [16].
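As a quick numerical check of the statement above, the following sketch combines $K = 3$ Gaussians weighted by responsibilities into a single Gaussian whose precision is the responsibility-weighted sum of the class precisions and whose mean is the corresponding precision-weighted average. The numerical values are illustrative, not taken from the chapter.

```python
def product_gaussian(q, mu, lam):
    """Combine K Gaussians (mu[k], lam[k]) weighted by responsibilities q[k].

    Returns (mean, precision) of the resulting Gaussian:
      precision = sum_k q[k] * lam[k]
      mean      = sum_k q[k] * lam[k] * mu[k] / precision
    """
    precision = sum(qk * lk for qk, lk in zip(q, lam))
    mean = sum(qk * lk * mk for qk, lk, mk in zip(q, lam, mu)) / precision
    return mean, precision

# Three tissue classes (e.g. CSF, grey matter, white matter) -- illustrative values
q = [0.2, 0.5, 0.3]          # responsibilities q_Ti(e_k)
mu = [30.0, 80.0, 120.0]     # class means
lam = [0.01, 0.02, 0.015]    # class precisions

mean, precision = product_gaussian(q, mu, lam)
```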
5.2 Updating the Tissue Intensity Distribution Parameters
As mentioned in Section 4.2, we now consider that the $\theta_i$'s are constant over subvolumes of a given partition of the entire volume. The MRF prior on $\theta = \{\theta_c, c \in \mathcal{C}\}$ is $p(\theta) \propto \exp(H_\Theta(\theta))$ and (5) can be written as,

$$\theta^{(r+1)} = \arg\max_{\theta \in \Theta}\; p(\theta) \prod_{i \in V} \prod_{k=1}^{K} g_T(y_i; \theta_i)^{a_{ik}} = \arg\max_{\theta \in \Theta}\; p(\theta) \prod_{c \in \mathcal{C}} \prod_{k=1}^{K} \prod_{i \in V_c} g_T(y_i; \theta_c)^{a_{ik}},$$
where $a_{ik} = q^{(r)}_{T_i}(e_k)\, q^{(r)}_{S_i}(e_{L+1}) + \sum_{l \,\text{s.t.}\, T^l = e_k} q^{(r)}_{S_i}(e_l)$. The second term in $a_{ik}$ is the probability that voxel $i$ belongs to one of the structures made of tissue $k$. The $a_{ik}$'s sum to one (over $k$) and $a_{ik}$ can be interpreted as the probability for voxel $i$ to belong to the tissue class $k$
when both tissue and structure segmentation information are combined. Using
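The combination of tissue and structure responsibilities into the weights $a_{ik}$ can be sketched as follows. Structure labels $1 \ldots L$ each have a tissue type $T^l$; label $L+1$ (no structure) falls back on the tissue responsibility. The function name and all values are illustrative.

```python
def combine_weights(q_T, q_S, tissue_of_structure):
    """a_ik = q_Ti(e_k) * q_Si(e_{L+1}) + sum over structures l of tissue k of q_Si(e_l)."""
    K = len(q_T)
    background = q_S[-1]                      # q_Si(e_{L+1}): no-structure label
    a = [q_T[k] * background for k in range(K)]
    for l, k in enumerate(tissue_of_structure):
        a[k] += q_S[l]                        # add structures made of tissue k
    return a

q_T = [0.1, 0.6, 0.3]              # tissue responsibilities (K = 3)
q_S = [0.2, 0.1, 0.3, 0.4]         # L = 3 structures + background label
tissue_of_structure = [1, 1, 2]    # T^l: structures 0,1 are tissue 1; structure 2 is tissue 2

a = combine_weights(q_T, q_S, tissue_of_structure)
# As stated in the text, the a_ik's sum to one over k:
assert abs(sum(a) - 1.0) < 1e-12
```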
the additional natural assumption that $p(\theta) = \prod_{k=1}^{K} p(\theta_k)$, it is equivalent to solve, for each $k = 1 \ldots K$,

$$\theta_k^{(r+1)} = \arg\max_{\theta_k \in \Theta_k}\; p(\theta_k) \prod_{c \in \mathcal{C}} \prod_{i \in V_c} g_T(y_i; \theta_c)^{a_{ik}}.$$

However, when $p(\theta_k)$ is chosen as a Markov field, the exact maximization (5.2) is still intractable. We therefore replace $p(\theta_k)$ by a product form given by its modal-field approximation [16]. This is actually equivalent to using the ICM [17] algorithm. Assuming a current estimation $\theta_k^{(\nu)}$ of $\theta_k$ at iteration $\nu$, we consider in turn, $\forall c \in \mathcal{C}$,

$$\theta_c^{(\nu+1)} = \arg\max_{\theta_c \in \Theta_k}\; p(\theta_c \mid \theta^{(\nu)}_{N(c)}) \prod_{i \in V_c} g_T(y_i; \theta_c)^{a_{ik}}, \qquad (10)$$
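The ICM-style sweep of equation (10) can be sketched as follows for the mean parameters of one tissue class $k$. For illustration we assume a fixed known precision per class and, anticipating the prior discussed below, a Gaussian prior on $\mu_c$ centred at the average of the neighbours' current means; all names and values are our own, not the chapter's.

```python
def icm_means(y, a, neighbours, lam=1.0, lam0=0.5, n_sweeps=20):
    """Modal-field / ICM sweeps for one tissue class.

    y[c]: intensities in subvolume c; a[c]: weights a_ik for this class;
    neighbours[c]: indices of neighbouring subvolumes.
    """
    C = len(y)
    # initialise each mu_c with the weighted local mean
    mu = [sum(w * v for w, v in zip(a[c], y[c])) / sum(a[c]) for c in range(C)]
    for _ in range(n_sweeps):
        for c in range(C):  # visit subvolumes in turn, as in (10)
            prior_mean = sum(mu[n] for n in neighbours[c]) / len(neighbours[c])
            s_a = sum(a[c])
            s_ay = sum(w * v for w, v in zip(a[c], y[c]))
            # mode of N(mu_c; prior_mean, lam0) * prod_i N(y_i; mu_c, lam)^{a_i}
            mu[c] = (lam0 * prior_mean + lam * s_ay) / (lam0 + lam * s_a)
    return mu

y = [[10.0, 12.0], [11.0, 13.0], [30.0, 28.0]]   # intensities per subvolume
a = [[1.0, 1.0], [1.0, 1.0], [1.0, 1.0]]         # weights a_ik for this class
nb = [[1], [0, 2], [1]]                           # chain neighbourhood
mu = icm_means(y, a, nb)
```

The spatial prior pulls the outlying third subvolume toward its neighbours, which is the regularising effect the local MRF on $\theta$ is designed to produce.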
where $N(c)$ denotes the indices of the subvolumes that are neighbors of subvolume $c$ and $\theta_{N(c)} = \{\theta_{c'}, c' \in N(c)\}$. At convergence, the obtained values give the updated estimation $\theta_k^{(r+1)}$.
The particular form (10) above guides the specification of the prior for $\theta$. Indeed, Bayesian analysis indicates that a natural choice for $p(\theta_c \mid \theta_{N(c)})$ has to be among conjugate or semi-conjugate priors for the Gaussian distribution $g_T(y_i; \theta_c)$. We choose to consider here the latter case. In addition, we assume that the Markovian dependence applies only to the mean parameters and consider that $p(\theta_c \mid \theta_{N(c)}) = p(\mu_c \mid \mu_{N(c)})\, p(\lambda_c)$, with $p(\mu_c \mid \mu_{N(c)})$ set to a Gaussian distribution with mean $m_c + \sum_{c' \in N(c)} \eta_{cc'} (\mu_{c'} - m_{c'})$ and precision $\lambda_{0k}$, and $p(\lambda_c)$
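The neighbour-dependent prior mean $m_c + \sum_{c' \in N(c)} \eta_{cc'}(\mu_{c'} - m_{c'})$ can be sketched as below: when the neighbours' current means agree with their reference values $m_{c'}$, the prior reduces to a Gaussian centred at $m_c$, and any deviation is propagated with weight $\eta_{cc'}$. Function name and numbers are illustrative.

```python
def prior_mean(c, mu, m, eta, neighbours):
    """m_c + sum over neighbours c' of eta_{cc'} * (mu_{c'} - m_{c'})."""
    return m[c] + sum(eta[c][n] * (mu[n] - m[n]) for n in neighbours[c])

m = [100.0, 100.0, 100.0]       # reference means m_c
mu = [100.0, 95.0, 100.0]       # current estimates mu_c (middle one deviates)
eta = {0: {1: 0.4}, 1: {0: 0.4, 2: 0.4}, 2: {1: 0.4}}
neighbours = [[1], [0, 2], [1]]

# subvolume 0 inherits part of its neighbour's deviation (95 - 100 = -5):
assert prior_mean(0, mu, m, eta, neighbours) == 98.0
```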