Biomedical Engineering Reference
of particles N_k emitted from a given voxel V_k and detected in the system is a random variable. Its values are integers, and its distribution is given by the Poisson distribution

p(N_k = j) = e^{-f_k} \frac{f_k^j}{j!}.
The parameter f_k is the average number of particles emitted, and thus proportional to the activity in voxel V_k, which we want to compute. Note that in typical applications the number of decays measured on a single line is a small integer. This makes it crucial to take the noise model into account.
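To see why small counts matter, a minimal NumPy sketch (the mean value 2.5 below is an arbitrary illustrative choice, not taken from the text) draws Poisson samples with a small mean and shows how strongly individual measurements fluctuate relative to that mean:

```python
import numpy as np

# Hypothetical mean number of decays detected on a single line.
f_k = 2.5

rng = np.random.default_rng(seed=0)
counts = rng.poisson(lam=f_k, size=10)  # ten independent measurements

# For a Poisson variable the standard deviation is sqrt(f_k), so a
# small mean implies a large relative fluctuation of the counts.
print(counts)
print("relative std dev:", np.sqrt(f_k) / f_k)
```

With f_k = 2.5 the relative standard deviation is about 63%, so any reconstruction method that treats such data as noise-free is bound to fail.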
Denote by a_{lk} the probability that a particle is detected on LOR L_l, provided it has been emitted in voxel V_k (note that in an ideal scanner without scatter this definition matches the one given above; it is simply a re-interpretation). Then the number of events g_l detected on line of response L_l is again a Poisson-distributed random variable, with mean value (Af)_l.
The probability of measuring a given data vector g, provided the activity distribution is given by f and all measurements are independent, is thus given by

P_g(f) = \prod_l \frac{(Af)_l^{g_l}}{g_l!} \, e^{-(Af)_l}.   (3.2)
Since we are looking at a random process, a delta source located at x could produce a measurement on an arbitrary line of response L; however, the probability is much higher if L passes through x. The main idea is therefore, for a given g, to find an f such that P_g(f) is maximized, i.e., the distribution f that makes the measurement g most likely. Consequently, f is called the maximum-likelihood solution.
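In practice one maximizes the logarithm of (3.2), which turns the product into a sum. A minimal sketch (the 2x2 system matrix and the count vector below are made-up illustrative numbers, and the constant term \sum_l \log g_l! is dropped since it does not depend on f):

```python
import numpy as np

def poisson_log_likelihood(A, f, g):
    """Log of (3.2), up to the f-independent constant sum(log(g_l!)).

    A : (L, K) system matrix, f : (K,) activity, g : (L,) counts.
    """
    Af = A @ f
    # log P_g(f) = sum_l [ g_l * log((Af)_l) - (Af)_l ] + const
    return np.sum(g * np.log(Af) - Af)

# Tiny illustrative example with made-up numbers.
A = np.array([[1.0, 0.5], [0.2, 1.0]])
f_true = np.array([3.0, 2.0])
g = np.array([4.0, 2.0])
print(poisson_log_likelihood(A, f_true, g))
```

Maximizing this function over all componentwise nonnegative f is exactly the problem the EM iteration below solves.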
In order to compute it, we define the EM algorithm [5], [16]

f^{j+1} = f^j \cdot \left( \frac{1}{A^T 1} \, A^T \frac{g}{A f^j} \right)^{\omega},   (3.3)

where \omega > 0 is an iteration parameter, 1 is a vector of ones, and all multiplications and divisions of vectors are pointwise.
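A direct transcription of (3.3) with NumPy might look as follows (a sketch: the 2x2 system, the starting vector of ones, and the 200-iteration budget are illustrative choices, and the data are synthetic and noise-free so that an exact solution of Af = g exists):

```python
import numpy as np

def em_step(A, f, g, omega=1.0):
    """One iteration of (3.3): f <- f * ((A^T 1)^-1 A^T (g / (A f)))^omega."""
    sens = A.T @ np.ones(A.shape[0])   # A^T 1, the sensitivity image
    ratio = A.T @ (g / (A @ f))        # back-project the data/model ratio
    return f * (ratio / sens) ** omega

# Illustrative 2-line, 2-voxel system with exact (noise-free) data.
A = np.array([[1.0, 0.5], [0.2, 1.0]])
f_true = np.array([3.0, 2.0])
g = A @ f_true                         # consistent data: Af_true = g

f = np.ones(2)                         # positive starting point
for _ in range(200):
    f = em_step(A, f, g)
print(f)
```

Note that the update is purely multiplicative: a positive starting vector stays positive throughout, which is one practical advantage of EM over additive schemes.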
We immediately notice that each (positive) f satisfying Af = g is a fixpoint of the iteration. Also, assuming that the equations are weighted such that A^T 1 = 1, and replacing all products by sums and all divisions by differences, we get back the ART algorithm (3.1) with A = A_j. So, EM is nothing but the multiplicative version of ART.
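The fixpoint property is easy to verify numerically (a self-contained sketch with an illustrative 2x2 system, using \omega = 1): if Af = g, the ratio g / (Af) is the all-ones vector, so the back-projected correction equals A^T 1 and the multiplicative factor is identically one.

```python
import numpy as np

# Illustrative system: pick a positive f and set g = A f,
# so that f solves Af = g exactly.
A = np.array([[1.0, 0.5], [0.2, 1.0]])
f = np.array([3.0, 2.0])
g = A @ f

# One step of the multiplicative update (3.3) with omega = 1:
# g / (A f) = 1, hence A^T (g / (A f)) = A^T 1 and the factor is 1.
f_next = f * (A.T @ (g / (A @ f))) / (A.T @ np.ones(2))

print(np.allclose(f_next, f))  # → True: f is a fixpoint
```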
It was shown in [16] that, under mild conditions, the EM algorithm converges toward the maximum-likelihood solution. Its main drawback is that convergence is very slow. Since we have already seen the relation of ART to EM, we can transfer Kaczmarz's main idea to the EM algorithm as well: use (3.3) not for the full system of equations but, as in the ART algorithm, select only a
 