and with

e = [e_1, …, e_j, …, e_N]^T = y − y′ = y − Xβ′     (15.11)
the residual errors. The least-squares estimates β′ of the regression values are the ones that minimize the residual sum of squares:

e^T e = (y − Xβ′)^T (y − Xβ′) → min
This criterion leads to the normal equations:
(X^T X) β′ = X^T y     (15.12)
If (X^T X) is invertible, i.e., if the design matrix is of full rank, then the least-squares estimates are given by

β′ = (X^T X)^{−1} X^T y     (15.13)
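The least-squares estimate above is straightforward to compute numerically. A minimal sketch in NumPy with simulated data (the design matrix, regressor values, and noise level below are illustrative, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

N, p = 100, 3                         # number of scans, number of regressors
X = rng.normal(size=(N, p))           # design matrix (assumed full rank)
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + 0.1 * rng.normal(size=N)   # measured signal

# Least-squares estimate: beta_hat = (X^T X)^{-1} X^T y, via the
# normal equations (X^T X) beta_hat = X^T y  (Eqs. 15.12-15.13)
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Residual errors e = y - X beta_hat  (Eq. 15.11)
e = y - X @ beta_hat
```

In practice `np.linalg.lstsq` (or a QR/SVD-based solver) is numerically safer than explicitly forming X^T X, but the direct form mirrors the equations.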
If the model is correct and the errors are normally distributed, the least-squares
estimates are also the maximum likelihood estimates and the best linear unbiased
estimates. The mean value and the variance of β′ are, respectively:

E{β′} = β   and   Var{β′} = σ² (X^T X)^{−1}     (15.14)
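Since σ² is unknown in practice, the covariance of β′ must itself be estimated; a common choice (assumed here, not prescribed by the text) is the unbiased estimator that divides the residual sum of squares by N − p. A sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

N, p = 200, 4
X = rng.normal(size=(N, p))           # design matrix
beta = np.array([1.0, 0.0, -2.0, 0.5])
y = X @ beta + 0.3 * rng.normal(size=N)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
e = y - X @ beta_hat

# Unbiased estimate of the noise variance sigma^2 (assumed estimator)
sigma2_hat = (e @ e) / (N - p)

# Estimated covariance of beta_hat: sigma^2 (X^T X)^{-1}  (Eq. 15.14)
cov_beta = sigma2_hat * XtX_inv
```

The diagonal of `cov_beta` gives the variance of each regressor estimate, which is what t-statistics on individual effects are built from.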
The estimation of the regressor values allows testing multiple linear hypoth-
eses and creating different types of statistical maps. These maps are used to
assess the effects of the various conditions included in the stimulation protocol
and to draw inferences regarding the differential responses of different locations
of the brain.
15.3.1.1
Overall Effects (R² Maps, F Maps)
A first type of map that can be obtained within the GLM framework is a map of the overall fit of the model to the data. This map is obtained by computing at each voxel the squared multiple regression coefficient (R²):

R² = var{y′} / var{y} = var{Xβ′} / var{y}     (15.15)
R² represents the portion of variance in the measured signal y (as measured about its mean) that is accounted for by variations in the estimated signal y′. In voxels with R² ≈ 1, the variance of the observed signal is well explained by the estimated model. Conversely, in voxels with R² ≈ 0, most of the observed variance remains unexplained after fitting the model.
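For a single voxel time course, R² as defined in Eq. (15.15) can be computed as the ratio of the variance of the fitted signal to the variance of the measured signal. A minimal sketch with a simulated periodic regressor (the design and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

N = 120
# Design matrix: constant term plus one sinusoidal regressor
X = np.column_stack([np.ones(N), np.sin(np.linspace(0, 8 * np.pi, N))])
y = X @ np.array([5.0, 1.5]) + 0.5 * rng.normal(size=N)  # voxel time course

beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
y_hat = X @ beta_hat                  # estimated signal y'

# R^2 = var{y'} / var{y}  (Eq. 15.15); with an intercept in the model
# this equals the usual 1 - SSE/SST
R2 = np.var(y_hat) / np.var(y)
```

An R² map is obtained by repeating this computation at every voxel; values near 1 flag voxels whose signal is well captured by the model.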