Fig. 2 Illustration of the two assumptions (adapted from Pu et al. 2008)
Let X = \{w_1, w_2, \ldots, w_k\} be the set of class labels, \mu_j the mean, and \Sigma_j the variance-covariance matrix of the jth class of X. Assuming that each land-cover class can be modeled by a multivariate normal distribution, the squared Mahalanobis distance of a given training unit t assigned to the jth class of X can be computed by Eq. (1):

d_j(t) = (t - \mu_j)^T \Sigma_j^{-1} (t - \mu_j)    (1)

Under these assumptions, d_j(t) is modeled by a chi-square random variable with k degrees of freedom, \chi^2_k, when the number of observations in each land-cover class is greater than 30 (Johnson and Wichern 1998). We can therefore apply the following test to identify anomalous training units (Johnson and Wichern 1998): for every class w and for every training unit t of w, if d_j(t) is greater than \chi^2_k(\alpha), where \alpha is the significance level of the test, we reject the hypothesis that t is a standard observation of class w; otherwise, t is accepted and kept in the training sample. We fixed the significance level at 2.5 %.

In practical applications, the class mean and variance-covariance matrix are not known a priori and must therefore be estimated. We estimated them with their standard maximum likelihood estimators, given by Eqs. (2) and (3) (Johnson and Wichern 1998):

\mu_j = (1 / n_j) \sum_{t \in w_j} t    (2)

\Sigma_j = (1 / (n_j - 1)) \sum_{t \in w_j} (t - \mu_j)(t - \mu_j)^T    (3)

where n_j is the number of training units in the jth class of X.

Supervised Image Classification

In recent years, many advanced classification approaches have been applied to image classification. Many factors, such as the source of the data, the classification system, the availability of classification software, and the spatial resolution of the remotely sensed data, must be taken into account when selecting a classification method. Different classification methods have their own merits; for the classification, we resort to the linear discriminant classifier (LDC). The LDC is a parametric classifier based on the homoskedasticity assumption, i.e., we assume that each land-cover class is modeled by a multivariate normal distribution and that all of these distributions share the same variance-covariance matrix. The LDC has several advantages over more sophisticated classification algorithms: it does not need as many training units as the maximum likelihood classifier (MLC) or support vector machines (Hastie et al. 2009), and it is simple in computational and operational terms and reasonably robust (Kuncheva 2004), in that the results remain good even when the classes do not have normal distributions.
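The screening test of Eqs. (1)-(3) can be sketched in a few lines of Python. This is a minimal illustration, not the study's implementation: the helper name `screen_training_units`, the synthetic two-band data, and the use of the feature dimension as the chi-square degrees of freedom are assumptions of this sketch.

```python
import numpy as np
from scipy.stats import chi2

def screen_training_units(T, alpha=0.025):
    """Keep-mask for the training units of one class, following the
    Mahalanobis / chi-square test of Eqs. (1)-(3) (hypothetical helper)."""
    n_j, p = T.shape
    mu_j = T.mean(axis=0)                      # Eq. (2): class mean
    diff = T - mu_j
    Sigma_j = diff.T @ diff / (n_j - 1)        # Eq. (3): variance-covariance
    # Eq. (1): squared Mahalanobis distance of every unit to the class mean.
    d = np.einsum('ij,ij->i', diff @ np.linalg.inv(Sigma_j), diff)
    # Upper-tail critical value; the feature dimension p is used as the
    # degrees of freedom here (an assumption of this sketch).
    return d <= chi2.ppf(1 - alpha, df=p)

rng = np.random.default_rng(0)
T = rng.multivariate_normal([0.3, 0.5], [[0.01, 0.0], [0.0, 0.02]], size=100)
T[0] = [2.0, 2.0]                              # deliberately anomalous unit
keep = screen_training_units(T)
print(f"{keep.sum()} of {len(T)} training units kept")
```

With the 2.5 % significance level of the text, roughly 97-98 % of well-behaved units pass the test, while the planted outlier is rejected.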
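For the LDC itself, one widely available implementation that pools a single covariance matrix across classes, matching the homoskedasticity assumption above, is scikit-learn's `LinearDiscriminantAnalysis`. The sketch below uses synthetic two-band data; the class means, covariance, and sample sizes are illustrative only, not from the study:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
# Two synthetic "land-cover classes" drawn with a shared covariance
# matrix, which is exactly the homoskedasticity assumption of the LDC.
cov = [[0.05, 0.01], [0.01, 0.05]]
X = np.vstack([
    rng.multivariate_normal([0.2, 0.4], cov, size=60),  # class 0 (illustrative)
    rng.multivariate_normal([0.8, 1.0], cov, size=60),  # class 1 (illustrative)
])
y = np.repeat([0, 1], 60)

ldc = LinearDiscriminantAnalysis()  # pooled covariance -> linear boundary
ldc.fit(X, y)
print("training accuracy:", ldc.score(X, y))
```

Because the pooled-covariance assumption holds by construction here, the linear boundary separates the two classes almost perfectly; when class covariances differ strongly in practice, a quadratic classifier such as the MLC may be preferable despite its larger training-sample requirements.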