TABLE 7.5 Comparison of classification accuracies by neural networks (ANNs, Model 48 of Table 7.4) and Gaussian Maximum Likelihood (GML) classifier.

                           Conditional Kappa coefficients                                  Overall Kappa   Overall
Classifier   High-density   Low-density   Exposed   Cropland/   Forest   Water             coefficient     accuracy (%)
             urban use      urban use     land      grassland
ANNs         0.74           0.67          0.80      0.87        0.85     1.00              0.82            84.67
GML          0.61           0.52          0.92      0.92        0.93     1.00              0.77            81.00
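The overall accuracies and Kappa statistics of the kind reported in Table 7.5 are derived from a classification confusion (error) matrix. A minimal NumPy sketch (an illustration, not the chapter's own code; the example matrix below is hypothetical):

```python
import numpy as np

def kappa_stats(cm):
    """Overall accuracy, overall Kappa, and per-class conditional Kappa
    from a square confusion matrix (rows = classified, cols = reference)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    diag = np.diag(cm)
    row = cm.sum(axis=1)                 # classified (map) totals
    col = cm.sum(axis=0)                 # reference totals
    overall_acc = diag.sum() / n
    chance = (row * col).sum() / n**2    # expected chance agreement
    overall_kappa = (overall_acc - chance) / (1.0 - chance)
    # Conditional Kappa for class i: (n*x_ii - x_i+ * x_+i) / (n*x_i+ - x_i+ * x_+i)
    cond_kappa = (n * diag - row * col) / (n * row - row * col)
    return overall_acc, overall_kappa, cond_kappa

# Hypothetical 3-class error matrix (rows = classified, cols = reference)
cm = [[50, 3, 2],
      [4, 45, 6],
      [1, 2, 47]]
acc, k, ck = kappa_stats(cm)
```

The conditional Kappa here uses the row (user's-accuracy) form; swapping `row` and `col` in the last expression gives the producer's form.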
accuracy increased from 81.33% to 84% (Table 7.4, Nos 35-44). Nevertheless, the impact of momentum on classification accuracy was quite marginal, as indicated by the relatively small standard deviation. The number of iterations had a moderate impact upon the classification accuracy, as shown by the standard deviation. The overall classification accuracy increased as the number of iterations rose to 1300; after that the accuracy began to decline (Fig. 7.3E).
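The rise-then-decline behavior with iteration count can be monitored by tracking held-out accuracy during training. A minimal NumPy sketch on toy two-class data (purely illustrative; the network size, learning rate, and data are assumptions, not the chapter's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy non-linear two-class problem standing in for the training samples
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)      # XOR-like labels
X_tr, y_tr = X[:150], y[:150]
X_va, y_va = X[150:], y[150:]

sig = lambda z: 1.0 / (1.0 + np.exp(-z))

# One tanh hidden layer, logistic-sigmoid output, plain gradient descent
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.1
best_acc, best_epoch = 0.0, 0

for epoch in range(1, 1501):
    h = np.tanh(X_tr @ W1 + b1)                # forward pass
    p = sig(h @ W2 + b2).ravel()
    d2 = ((p - y_tr) / len(y_tr))[:, None]     # cross-entropy gradient
    dh = (d2 @ W2.T) * (1.0 - h**2)            # backprop through tanh
    W2 -= lr * (h.T @ d2);   b2 -= lr * d2.sum(axis=0)
    W1 -= lr * (X_tr.T @ dh); b1 -= lr * dh.sum(axis=0)

    # Held-out accuracy: typically climbs, then plateaus or declines
    pv = sig(np.tanh(X_va @ W1 + b1) @ W2 + b2).ravel()
    acc = np.mean((pv > 0.5) == y_va)
    if acc > best_acc:
        best_acc, best_epoch = acc, epoch
```

Keeping the weights from the best-scoring epoch (early stopping) avoids having to commit to a fixed iteration count in advance.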
7.3.5.4 MLP neural networks and GML classifier

Table 7.5 summarizes the classification accuracies achieved by the best neural network model (Table 7.4, No. 48) and the GML classifier. Clearly, the neural network model showed a moderate improvement in the overall classification accuracy and the overall Kappa coefficient compared with the outcome of the GML classifier. When looking at the conditional Kappa coefficients for different land use/cover types, however, the neural network model performed much better when classifying the two spectrally complex urban classes, which further confirms the robustness of the neural network technique in dealing with non-linear, complex phenomena. On the other hand, the GML classifier performed better in classifying several spectrally homogeneous land use/cover classes, such as forest, cropland, and exposed land, confirming the applicability of this popular parametric classifier.

7.3.6 Summary

In this focused study, we investigated the sensitivity of neural networks to six topological and training parameters. We found that the performance of neural networks was highly sensitive to the number of hidden layers, the type of activation function, and the training rate. The three other parameters, i.e., the training threshold, momentum, and number of iterations, had a marginal impact upon the classification accuracy. A careful neural network configuration can lead to a moderate improvement in overall accuracy, and a substantial improvement for the two urban classes, compared with the outcome of the GML classifier. These observations suggest the importance of internal parameter settings when using neural networks for image classification.

On the other hand, several practical guidelines emerged from this study, which can be useful when parameterizing the MLP neural network architecture for image classification. Specifically, a small number of hidden layers should be used when the training sample size is moderate or the number of input and output neurons is small. Using a large number of hidden layers will increase the computational complexity and lead to suboptimal or unsatisfactory performance when the training sample size or the number of input and output neurons is not large enough. The log-sig function should be used, as it can help yield much better classification accuracies and is relatively less sensitive to training threshold values. For better classification accuracies, a small learning rate, a large momentum, and a moderate number of iterations should be used.

7.4 Training algorithm performance

7.4.1 Experimental design

In this focused study, we evaluated the performance of several popular training algorithms in image classification by MLP networks. They include the SDE, GDM, RP, CGF, CGP, CGB, SCG, BFGS, and LM algorithms (Table 7.2). We used each algorithm to train the MLP networks multiple times using identical training samples, and then applied each of the resultant network models to derive land cover information from the ETM+ image described earlier. The training algorithms were further evaluated according to their training efficiency, capability of convergence, classification accuracy, and stability of the classification accuracy.

7.4.2 Network training and image classification
We constructed an MLP network with seven input neurons, twenty neurons in a single hidden layer, and ten output neurons. The activation functions for the hidden and output neurons were the hyperbolic tangent function and the logistic sigmoid function, respectively. The input data were the seven ETM+ image bands, excluding the thermal band because of its coarse spatial resolution, and the output layer consisted of 10 land use/cover classes or sub-classes. While the land classification scheme used here was the same as the one described earlier, the high-density urban class comprised three subclasses (i.e., open space, large-roof buildings, and small-roof buildings in the city core), cropland/grassland included two subclasses (i.e., well-vegetated grassland and less-vegetated land), and forest consisted of two subclasses (i.e., coniferous/mixed forest and deciduous forest). This is why the output layer comprised 10 neurons. For each subclass/class,