Image Processing Reference
In-Depth Information
[Figure: voxel time-series vector = design matrix × parameters + error vector]
FIGURE 18.3 Diagram of GLM regression and boxcar function fitting.
entry is a constant 1, which is the same for all the input vectors and can be
neglected in SVR learning.
With the input vector in Equation 18.9, temporal modeling is incorporated
into the regression. Although m ( t ) here is a simple boxcar function, a whole
family of m ( t ) could be used in the same way that the design matrix in the GLM
is used to encode multiple experimental factors or confounds.
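The boxcar regression described above can be sketched as an ordinary GLM least-squares fit. This is a minimal illustration, not the chapter's actual implementation; the amplitudes, baseline, and noise level are invented for the example.

```python
import numpy as np

# Hypothetical sketch: fitting a boxcar reference function m(t) to a voxel
# time series via GLM least squares. Sizes follow the synthetic example in
# the text: 6 images per off/on period, 72 time points in total.
n_points, period = 72, 6
m = np.tile(np.r_[np.zeros(period), np.ones(period)], n_points // (2 * period))

# Design matrix: the boxcar regressor plus a constant column for the baseline.
X = np.column_stack([m, np.ones(n_points)])

# Synthetic voxel series: activation amplitude 3, baseline 100, unit noise
# (all illustrative values).
rng = np.random.default_rng(0)
voxel = 3.0 * m + 100.0 + rng.normal(0, 1, n_points)

beta, *_ = np.linalg.lstsq(X, voxel, rcond=None)  # [amplitude, baseline]
print(beta)
```

A family of reference functions (multiple experimental factors, confounds) would simply add more columns to `X`, as the text notes for the GLM design matrix.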
In order to test with access to the ground truth, we generate a 2-D time series
(spatial size 52 × 63) of synthetic data that imitates a single fMRI brain slice in
which four regions are activated. Three different amplitudes of activations are added
to the gray matter to simulate weak, medium, and strong activations as in real fMRI
data (see Figure 18.4a ). For simplicity and easier intuitive visualization, the acti-
vations are temporally in the form of a boxcar function, with six images during
each off or on period. Note that a more realistic and complicated reference function
formed by convolving this boxcar with a gamma function (6) can also be used. The
total number of time points is 72 (6 cycles). The generated data in Figure 18.4b is
then used as ground truth for comparisons. Simulated noisy data (see Figure 18.4c)
are obtained by adding Gaussian noise N(0, 32²) to the ground truth data. The
recovered image by our SVR method (W-model = 1; Figure 18.4d) accurately
restores the ground truth (Figure 18.4b). The image obtained using Gaussian
smoothing of the original noisy data is shown in Figure 18.4e for comparison.
Obviously, the ST-SVR method significantly improves the quality of the noisy data.
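The synthetic-data setup just described can be sketched as follows. This is a hedged reconstruction: the slice size (52 × 63), boxcar timing (6 images per off/on period, 72 time points), and noise level N(0, 32²) come from the text, while the region locations, baseline, and activation amplitudes are invented for illustration.

```python
import numpy as np

# Build a 52 x 63 synthetic slice over 72 time points with boxcar activation.
h, w, n_points, period = 52, 63, 72, 6
boxcar = np.tile(np.r_[np.zeros(period), np.ones(period)],
                 n_points // (2 * period))

# Baseline intensity everywhere (illustrative value), then add weak, medium,
# and strong activations in small square regions (invented locations/amplitudes).
ground_truth = np.full((n_points, h, w), 500.0)
for (r, c), amp in {(10, 10): 20.0, (10, 40): 60.0, (35, 25): 120.0}.items():
    ground_truth[:, r:r + 6, c:c + 6] += amp * boxcar[:, None, None]

# Noisy data: additive Gaussian noise N(0, 32^2), as in the text.
rng = np.random.default_rng(1)
noisy = ground_truth + rng.normal(0, 32, ground_truth.shape)
print(noisy.shape)  # -> (72, 52, 63)
```

Keeping `ground_truth` separate makes the comparison in the text possible: the denoised output of ST-SVR (or of Gaussian smoothing) can be scored directly against it.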
18.4.3 Multiresolution Signal Analysis
With the aforementioned formulation, the corresponding entries of the input
vector are normalized over the training examples within each window, in order
to capture the underlying relationship with ST-SVR on the windowed data and to
accommodate differences in scale and training-set size. After normalization, we
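The per-window normalization described above can be sketched as a z-scoring of each input-vector entry over the training examples inside one window. The function name and the epsilon guard are assumptions for illustration, not the chapter's implementation.

```python
import numpy as np

def normalize_window(X, eps=1e-12):
    """Z-score each feature over the training examples of one window.

    X : array of shape (n_examples, n_features) drawn from a single
        spatiotemporal window; eps guards against constant features.
    """
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    return (X - mu) / (sigma + eps)

# Toy usage: two features on very different scales become comparable.
X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0]])
Xn = normalize_window(X)
print(Xn.mean(axis=0), Xn.std(axis=0))  # approx. zeros and ones
```

Normalizing within each window, rather than globally, lets windows of different sizes and intensity ranges be treated uniformly by the SVR.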