basis, this is used to compensate for deviations across a series of scanners and typically improves the reconstruction drastically away from the center.
Also, the incorporation of additional information like TOF poses no problem: TOF simply changes the probability distribution for the origin of a recorded event on the line $L$ from uniform to Gaussian, which is easily incorporated into the system matrix, in particular for list mode (see below).
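To make the Gaussian reweighting concrete, here is a minimal NumPy sketch (an illustration, not the book's code; names such as `tof_center_mm` and `tof_sigma_mm` are assumptions): the samples of a system-matrix row along a line of response are weighted uniformly without TOF, and by a Gaussian centered at the position implied by the arrival-time difference with TOF.

```python
# Illustrative sketch of TOF reweighting along one line of response.
# All parameter names and values are assumptions, not the book's notation.
import numpy as np

def line_weights(p1, p2, tof_center_mm=None, tof_sigma_mm=60.0, n_samples=200):
    """Sample points along the line p1 -> p2 and return per-sample weights."""
    t = np.linspace(0.0, 1.0, n_samples)
    points = p1[None, :] + t[:, None] * (p2 - p1)[None, :]
    if tof_center_mm is None:
        w = np.full(n_samples, 1.0 / n_samples)   # uniform weighting (no TOF)
    else:
        s = t * np.linalg.norm(p2 - p1)           # arc length along L in mm
        w = np.exp(-0.5 * ((s - tof_center_mm) / tof_sigma_mm) ** 2)
        w /= w.sum()                              # Gaussian weighting (TOF)
    return points, w
```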
Note, however, one serious drawback apart from computation time: many parameters of iterative algorithms (such as resolution, stopping time, and iteration parameters) can hardly be chosen in an optimal way and are usually estimated, in contrast to analytical methods, where a full mathematical analysis allows optimal choices.
3.3.4 List mode
For three reasons, rebinning is a nasty process. First, the position of particles, and thus the line that defines a decay, can be measured with high precision; in the rebinning process, this line is approximated by a line in $S$, so the quality of the measurement is not fully exploited. Second, to start the reconstruction process, we need to complete the rebinning process first, so there is a time delay; no just-in-time processing is possible. Third, the remarks above show that a completely random arrangement of the equations is usually better than a structured one. By rebinning, we turn the optimally random arrangement of the incoming list of events into a structured list (which can, of course, then be randomized again).
List mode solves these problems. List mode refers to the idea that equations are processed according to the list of events that comes directly from the detectors, without a rebinning process. The algorithm is most easily understood by theoretically choosing $S$ to be the set of all lines rather than a discrete set of lines. Since decays are supposed to take place randomly with random direction, the probability that the same line is measured twice is zero.
So $g_L$ is zero for almost all $L$ in $S$, except for those lines where an event was actually measured on $L$; in this case
$$g_L = 1.$$
Looking into the EM algorithm, we find that if a component of the vector to which $A^t$ is applied vanishes, the corresponding row of $A$ does not contribute to the result, so in the computation of the backprojection we can safely delete all rows of $A$ which belong to entries in $g$ that vanish. We end up with a new system matrix $A_{LM}$ and the algorithm
$$f^{k+1} = f^k \cdot \frac{1}{A^{t}\mathbf{1}} \, A_{LM}^{t} \, \frac{1}{A_{LM} f^{k}}. \qquad (3.4)$$
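As an illustration, the following sketch implements one iteration of (3.4) under the simplifying assumption that $A_{LM}$ fits into memory as a dense array of shape (n_events, n_voxels); in practice its rows would be computed on the fly, and all names here are placeholders rather than the book's notation.

```python
# Sketch of one list-mode EM iteration (3.4), under assumed dense storage.
# `sens` is the normalization A^t 1 over ALL lines, not A_LM^t 1 (see the
# remark following this sketch).
import numpy as np

def lm_em_step(f, A_LM, sens, eps=1e-12):
    """One iteration of f <- f * (1 / sens) * A_LM^t (1 / (A_LM f))."""
    forward = A_LM @ f                         # A_LM f^k, one value per event
    backproj = A_LM.T @ (1.0 / np.maximum(forward, eps))
    return f * backproj / np.maximum(sens, eps)

# Usage: start from a constant positive image and iterate.
# f = np.ones(n_voxels)
# for _ in range(n_iterations):
#     f = lm_em_step(f, A_LM, sens)
```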
It is very important to note here that the normalization factor is still $A^{t}\mathbf{1}$ rather than $A_{LM}^{t}\mathbf{1}$. Of course, following this idea, the normalization factor has to be computed using the interpretation above rather than explicitly; multiplying the infinite-dimensional matrix by $\mathbf{1}$ would make no sense. A sketch of this interpretation follows.
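One possible reading of this remark in code: the sensitivity $A^{t}\mathbf{1}$ is accumulated over a fine, finite discretization of all geometrically possible lines. Here `candidate_line_rows` is a hypothetical iterable yielding one system-matrix row (of length n_voxels) per candidate line; it is an assumption for illustration, not part of the book's formulation.

```python
# Hedged sketch: compute the normalization A^t 1 by accumulating the rows
# of A over a discretization of all possible lines, instead of forming an
# explicit (infinite-dimensional) matrix-vector product.
import numpy as np

def sensitivity_image(candidate_line_rows, n_voxels):
    sens = np.zeros(n_voxels)
    for row in candidate_line_rows:
        sens += row        # summing the rows of A is exactly A^t 1
    return sens
```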