\[
E_{q_S^{(r-1)}}\big[\log p(T \mid S, y; \psi^{(r)})\big] = -E_{q_S^{(r-1)}}[\log W_T]
+ \sum_{i \in V} \Big( {}^t T_i \, E_{q_S^{(r-1)}}[\gamma_i(S_i)]
+ \sum_{j \in N(i)} U_{ij}(T_i, T_j; \eta_T)
+ \log g_T\big(y_i; {}^t\theta^{(r)} T_i\big) \Big), \quad (8)
\]
where W T is a normalizing constant that does not depend on T and can be
omitted in the maximization of step E-T. The external field term leads to
\[
E_{q_{S_i}^{(r-1)}}[\gamma_i(S_i)] = \sum_{l=1}^{L} e_{T^l}\, q_{S_i}^{(r-1)}(e_l)
= {}^t\Big( \sum_{l \,\text{s.t.}\, T^l=1} q_{S_i}^{(r-1)}(e_l),\;
\sum_{l \,\text{s.t.}\, T^l=2} q_{S_i}^{(r-1)}(e_l),\;
\sum_{l \,\text{s.t.}\, T^l=3} q_{S_i}^{(r-1)}(e_l) \Big).
\]
The $k$-th ($k = 1, \ldots, 3$) component of the above vector represents the probability that voxel $i$ belongs to a structure whose tissue class is $k$: the larger this probability, the more tissue $k$ is favored a priori.
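As a minimal illustration (ours, not from the paper), this accumulation can be written in a few lines of NumPy; the names qS (the structure posteriors $q_{S_i}^{(r-1)}(e_l)$) and tissue_of (the map $l \mapsto T^l$) are hypothetical:

import numpy as np

def external_field(qS, tissue_of, n_tissues=3):
    # qS[i, l] = q_{S_i}^{(r-1)}(e_l): posterior that voxel i lies in structure l
    # tissue_of[l] = T^l in {0, 1, 2}: tissue class of structure l
    n_voxels, L = qS.shape
    gamma = np.zeros((n_voxels, n_tissues))
    for l in range(L):
        # add each structure's posterior mass to its tissue class
        gamma[:, tissue_of[l]] += qS[:, l]
    return gamma  # gamma[i, k] = probability that voxel i is in a structure of tissue k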
Finally, we note that step E-T is equivalent to the E-step one would obtain when applying EM to a standard hidden MRF over $t$ with Gaussian class distributions and an external field parameter fixed to values based on the current structure segmentation.
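Written out (our rephrasing of (8), with $\bar{\gamma}_i = E_{q_{S_i}^{(r-1)}}[\gamma_i(S_i)]$), this equivalent hidden MRF is
\[
p(t \mid y; \psi^{(r)}) \;\propto\; \prod_{i \in V} g_T\big(y_i; {}^t\theta^{(r)} t_i\big)\,
\exp\Big( \sum_{i \in V} \Big( {}^t t_i\, \bar{\gamma}_i + \sum_{j \in N(i)} U_{ij}(t_i, t_j; \eta_T) \Big) \Big),
\]
in which $\bar{\gamma}_i$ plays the role of the fixed external field.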
To solve this step, various inference techniques for hidden MRFs can be applied. In this paper, we adopt the mean-field-like algorithms of [16], used in [4] for MRI brain scans. This class of algorithms has the advantage of turning the initial intractable model into an equivalent system of independent variables for which the exact EM can be carried out. Following a mean-field principle, when spatial interactions are defined via Potts models, these algorithms are based on the approximation of $U_{ij}(t_i, t_j; \eta_T)$ by $U_{ij}(t_i, \tilde{t}_j; \eta_T) = \eta_T\, {}^t t_i\, \tilde{t}_j$, where $\tilde{t}$ is a particular configuration of $T$ that is updated at each iteration according to a specific scheme. We refer to [16] for details on three possible schemes to update $\tilde{t}$.
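For concreteness, one fixed-point sweep of the resulting independent-voxel E-step might look as follows (a sketch of ours, not the authors' code; lik holds the Gaussian likelihoods $g_T(y_i; {}^t\theta^{(r)} e_k)$, gamma the external field computed above, t_tilde the fixed configuration $\tilde{t}$ stored as one-hot rows or current means, and neighbors[i] the index array $N(i)$; all names are hypothetical):

def et_step(lik, gamma, t_tilde, neighbors, eta_T):
    # q_{T_i}(e_k) ∝ g_T(y_i; theta_k) * exp(gamma[i,k] + eta_T * sum_{j in N(i)} t_tilde[j,k])
    n_voxels, K = lik.shape
    qT = np.empty_like(lik)
    for i in range(n_voxels):
        field = gamma[i] + eta_T * t_tilde[neighbors[i]].sum(axis=0)
        unnorm = lik[i] * np.exp(field - field.max())  # shift exponent for stability
        qT[i] = unnorm / unnorm.sum()
    return qT

Refreshing t_tilde between sweeps with the current mean, mode, or a simulated value of $q_T$ yields the three update schemes discussed in [16].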
Similarly, using definitions (4) and (5),
\[
E_{q_T^{(r)}}\big[\log p(S \mid T, y; \psi^{(r)})\big] = -E_{q_T^{(r)}}[\log W_S]
+ \sum_{i \in V} \Big( {}^t S_i\, f_i
+ \sum_{j \in N(i)} U_{ij}(S_i, S_j; \eta_S)
+ E_{q_T^{(r)}}\big[\log g_S(y_i \mid T_i, S_i; \theta_i^{(r)})\big] \Big), \quad (9)
\]
where the normalizing constant W_S does not depend on S and can be omitted in the maximization of step E-S. The last term can be computed further:
\[
E_{q_T^{(r)}}\big[\log g_S(y_i \mid T_i, S_i; \theta_i^{(r)})\big] = \log g_S\big(y_i \mid S_i; \theta_i^{(r)}\big),
\]
where
\[
g_S(y_i \mid s_i; \theta_i) = \big[\, g_T\big(y_i; {}^t\theta_i\, e_{T^{s_i}}\big)\, f_i(s_i) \,\big]^{w(s_i)}
\,\Big[ \Big( \prod_{k=1}^{3} g_T\big(y_i; {}^t\theta_i\, e_k\big)^{\, q_{T_i}^{(r)}(e_k)} \Big)\, f_i(e_{L+1}) \Big]^{(1 - w(s_i))}.
\]
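In the log domain, and in the same hypothetical notation as the sketches above (log_lik[i, k] = $\log g_T(y_i; {}^t\theta_i e_k)$, log_f[i, s] = $\log f_i(e_s)$ with index L standing for the background label $e_{L+1}$, and w the weights $w(s)$, assumed to vanish on the background label), this formula reads:

def log_gS(log_lik, qT, log_f, w, tissue_of, i, s):
    # log g_S(y_i | s_i = e_s) for voxel i; s == L denotes the background label e_{L+1}
    L = len(tissue_of)
    # second bracket: sum_k q_{T_i}(e_k) * log g_T(y_i; theta_i, e_k) + log f_i(e_{L+1})
    background = qT[i] @ log_lik[i] + log_f[i, L]
    # first bracket: log g_T(y_i; theta_i, e_{T^s}) + log f_i(e_s); undefined when s == L
    structure = 0.0 if s == L else log_lik[i, tissue_of[s]] + log_f[i, s]
    return w[s] * structure + (1.0 - w[s]) * background

Plugging these values into a fixed-neighbor update of the same form as et_step, with $f_i$ as the external field and $\eta_S$ as the interaction parameter, then solves step E-S.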