variability, and less large-scale variability, which is also of longer range than that of forest and shrub. For further details regarding the interpretation of variogram and covariance functions computed from remotely sensed imagery, see Woodcock et al. (1988).
11.3.1 Spectral and Spatial Classifications
Using the class-conditional means m_{X|1}, m_{X|2}, m_{X|3} and (co)variance matrices Σ_{X|1}, Σ_{X|2}, Σ_{X|3}, three Gaussian likelihood functions were established for the vector x(u) of reflectance values at any pixel u not in the training set (Equation 11.1). The three Gaussian likelihood functions were
subsequently inverted (Equation 11.2) to compute the three spectrally derived preposterior probabilities p[c_1(u) | x(u)], p[c_2(u) | x(u)], and p[c_3(u) | x(u)] for forest, shrub, and rangeland, respectively. These GML preposterior probabilities are shown in Figure 11.2a–c. Note (1) the high degree of noise in the probabilities, (2) the confusion of shrub and rangeland (probabilities close to 0.5), and (3) the mottled appearance that entails diffuse class boundaries. The corresponding MAP selection at each pixel u is shown in Figure 11.2d. Note again the high degree of fragmentation
in the classified map. The overall classification accuracy (evaluated against the reference classification) was 0.73 (Kappa = 0.44), indicating rather severe misclassification.
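As a concrete illustration of the GML step described above, the sketch below builds class-conditional Gaussian likelihoods from training samples, inverts them by Bayes' rule into spectral preposterior probabilities, and takes the MAP class. All of the numbers (band values, sample sizes, equal priors) are invented for the example; this is not the chapter's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: 3 classes (forest, shrub, rangeland), 2 spectral bands.
# The class means and spread below are made up for illustration.
means_true = [np.array([0.2, 0.4]), np.array([0.3, 0.35]), np.array([0.35, 0.3])]
train = {k: rng.normal(m, 0.03, size=(50, 2)) for k, m in enumerate(means_true)}

# Class-conditional means and covariance matrices estimated from training pixels.
m = {k: x.mean(axis=0) for k, x in train.items()}
S = {k: np.cov(x, rowvar=False) for k, x in train.items()}

def gaussian_likelihood(x, mean, cov):
    """Multivariate Gaussian density of reflectance vector x given a class."""
    d = x - mean
    norm = np.sqrt((2 * np.pi) ** len(x) * np.linalg.det(cov))
    return np.exp(-0.5 * d @ np.linalg.inv(cov) @ d) / norm

def preposterior(x, priors=(1 / 3, 1 / 3, 1 / 3)):
    """Invert the likelihoods (Bayes' rule) into spectral preposterior probs."""
    lik = np.array([priors[k] * gaussian_likelihood(x, m[k], S[k]) for k in range(3)])
    return lik / lik.sum()

x = np.array([0.31, 0.34])      # reflectance vector at a pixel not in the training set
p = preposterior(x)             # p[c_k(u) | x(u)], k = 1, 2, 3
map_class = int(np.argmax(p))   # MAP selection at the pixel
print(p, map_class)
```

The per-pixel noise discussed in the text arises because each MAP decision here uses only the pixel's own spectral vector, with no reference to its neighbors.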
Arguably, in the presence of noise, the original spectral vector could have been replaced by a vector of the same dimensions whose entries are averages of reflectance values within a (typically 3 × 3) neighborhood around each pixel (Switzer, 1980). This, however, amounts to implicitly introducing contextual information into the classification procedure: spatial variability in the reflectance values is suppressed via a form of low-pass filter to introduce more spatial correlation, and thus produce less fragmented classification maps. In the absence of noise-free data, any such filtering procedure is rather arbitrary: there is no reason, for example, to prefer a 3 × 3 over a 5 × 5 filter. In this chapter, we propose a method for introducing that notion of compactness in classification via a model of spatial correlation inferred from the training pixels themselves.
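The low-pass filtering described above can be sketched as a simple moving average; the window size (here 3 × 3) is exactly the arbitrary choice the text warns about. A minimal NumPy version, with a made-up single-band image:

```python
import numpy as np

def mean_filter(band, size=3):
    """Average each pixel over a size x size neighborhood (edges replicated)."""
    pad = size // 2
    padded = np.pad(band, pad, mode="edge")
    out = np.zeros_like(band, dtype=float)
    for i in range(band.shape[0]):
        for j in range(band.shape[1]):
            out[i, j] = padded[i:i + size, j:j + size].mean()
    return out

# Toy band with a single noisy spike at pixel (1, 1).
band = np.array([[0., 0., 0., 0.],
                 [0., 9., 0., 0.],
                 [0., 0., 0., 0.],
                 [0., 0., 0., 0.]])
smoothed = mean_filter(band)
print(smoothed[1, 1])  # 1.0: the spike is spread over its 3 x 3 neighborhood
```

Spatial variability is suppressed (the spike drops from 9 to 1), which is precisely the implicit injection of spatial correlation that the proposed method makes explicit instead.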
Ordinary indicator kriging (OIK) (Equation 11.5 and Equation 11.6) was performed using the three sets of training class indicators and their corresponding indicator covariance models to compute the space-derived preposterior probabilities p[c_1(u) | g], p[c_2(u) | g], and p[c_3(u) | g] for forest, shrub, and rangeland, respectively. These OIK preposterior probabilities are shown in Figure 11.3a–c. Note the very smooth spatial patterns and the absence of clear boundaries, as opposed to those found in the spectrally derived preposterior probabilities of Figure 11.2. Note also that the training sample class labels are reproduced at the training locations, per the data-exactitude property of OIK. The corresponding MAP selection at each pixel u
is shown in Figure 11.3d. The overall classification accuracy is 0.73 (Kappa = 0.44), the same as that computed from the spectrally derived classification, indicating the same level of severe misclassification for the spatially derived classification.
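For intuition, ordinary indicator kriging at a single pixel can be sketched as follows. The exponential covariance model, its sill and range, the training coordinates, and the class labels are all assumptions invented for the example, not the models fitted in the chapter. The key points are that one kriging system yields weights applied to each class indicator, and that the unbiasedness constraint makes the weights (and hence the three preposterior probabilities) sum to one.

```python
import numpy as np

def exp_cov(h, sill=0.25, a=10.0):
    """Assumed isotropic exponential indicator covariance (practical range a)."""
    return sill * np.exp(-3.0 * h / a)

def oik(u, coords, labels, n_classes=3):
    """Space-derived preposterior probabilities p[c_k(u) | g] at pixel u."""
    n = len(coords)
    h = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # Ordinary kriging system: data-to-data covariances plus the
    # unbiasedness (Lagrange) row and column.
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = exp_cov(h)
    A[:n, n] = A[n, :n] = 1.0
    b = np.append(exp_cov(np.linalg.norm(coords - u, axis=1)), 1.0)
    w = np.linalg.solve(A, b)[:n]  # same weights for every class indicator
    probs = np.array([w @ (labels == k) for k in range(n_classes)])
    # Guard against slight negative weights, then renormalize.
    probs = np.clip(probs, 0.0, None)
    return probs / probs.sum()

coords = np.array([[0.0, 0.0], [4.0, 1.0], [1.0, 5.0], [6.0, 6.0]])
labels = np.array([0, 0, 1, 2])  # forest, forest, shrub, rangeland (made up)
p_space = oik(np.array([2.0, 2.0]), coords, labels)
print(p_space)
```

Estimating at a training location itself returns that pixel's own class indicator, illustrating the data-exactitude property noted above.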
11.3.2 Merging Spectral and Contextual Information
Bayesian fusion (Equation 11.9) was performed to combine the individually derived spectral and spatial preposterior probabilities into posterior probabilities p[c_1(u) | x(u), g], p[c_2(u) | x(u), g], and p[c_3(u) | x(u), g] for forest, shrub, and rangeland, respectively; these posterior probabilities account for both information sources and are shown in Figure 11.4a–c. Compared to the spectrally derived preposterior probabilities of Figure 11.2, the latter posterior probabilities have smoother spatial patterns and much less noise. Compared to the spatially derived preposterior probabilities of Figure 11.3, the latter posterior probabilities have more variable patterns and indicate clearer boundaries. The corresponding MAP selection at each pixel u is shown in Figure 11.4d. The overall classification accuracy increased to 0.80 and the Kappa coefficient to