these recently advanced methods. It is designed to simulate human learning processes through the organization and strengthening of pathways between input data and output data. Because NN-classifiers are nonparametric and act as general-purpose computing tools that can handle complex non-linear problems, the use of NNs for classifying remotely sensed data has developed quickly over the past decade. Researchers have noted that NNs perform better than standard statistical classifiers such as MLC (Tso and Mather 2009).
NNs have been increasingly used since the 1990s (Franklin 1995; Sugumaran 2001) in the field of pattern recognition in general, and in the field of remote sensing analysis and classification in particular. Applications cover supervised classification (Foody and Arora 1997) and unsupervised classification (Tso 1997). A broad introduction to neural networks was given by Bishop (1995), while a very good presentation of applying neural networks to classification, and of their relationship to conventional statistical classification, was provided by Schurmann (1996). Overviews in the context of remote sensing have been given by Benediktsson et al. (1990) and Kavzoglu (2001).
The user-selected factors affecting the NN-classifier are, according to Kavzoglu (2001): (1) learning factors: the back-propagation learning algorithm requires the analyst to supply values for the learning rate and momentum; (2) initial weights: the random settings given to the network before training affect its performance; (3) number of training iterations: this determines the degree of generalization as opposed to specialization of the solution. If a network is trained for a very large number of iterations, it may not perform well on the test data; equally, if it is not trained long enough, it will not be able to separate the classes; (4) number of hidden layers and units: this controls the ability of the network to learn and generalize; and (5) number of input patterns: some researchers have suggested that accuracy is influenced by the number of training patterns.
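The five user-selected factors above can be made concrete with a minimal back-propagation sketch. This is an illustrative toy example, not a remote sensing classifier: the XOR patterns, seed, and all parameter values are assumptions chosen for demonstration.

```python
import numpy as np

# Minimal back-propagation network illustrating the five user-selected
# factors listed above. All names and values are illustrative choices.
rng = np.random.default_rng(0)

learning_rate = 0.5   # (1) learning factors chosen by the analyst
momentum = 0.9
n_iterations = 2000   # (3) number of training iterations
n_hidden = 4          # (4) hidden units in a single hidden layer

# (5) training patterns: XOR, a tiny non-linear toy problem
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# (2) initial weights: a different random seed gives a different run
W1 = rng.normal(scale=0.5, size=(2, n_hidden))
W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
vW1, vW2 = np.zeros_like(W1), np.zeros_like(W2)

losses = []
for _ in range(n_iterations):
    hidden = sigmoid(X @ W1)                 # forward pass
    out = sigmoid(hidden @ W2)
    losses.append(float(np.mean((out - y) ** 2)))
    d_out = (out - y) * out * (1 - out)      # backward pass (MSE gradient)
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    # weight update combining learning rate and momentum
    vW2 = momentum * vW2 - learning_rate * (hidden.T @ d_out)
    vW1 = momentum * vW1 - learning_rate * (X.T @ d_hidden)
    W2 += vW2
    W1 += vW1

predictions = (sigmoid(sigmoid(X @ W1) @ W2) > 0.5).astype(int)
```

Changing the seed, the learning rate, or the iteration count alters the outcome of a run, which is exactly the sensitivity to user-selected factors the list describes.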
NNs rely only weakly on assumptions about the data distribution of the examples and about the nature of the relationship between inputs and outputs (Paola and Schowengerdt 1995). This is an advantage that makes these algorithms more flexible than statistical classifiers, particularly when the training data set is small and reliable estimation of statistical parameters is hard to achieve (Tso and Mather 2009). Also, different sources of data can be used as inputs, which are then scaled to a common range (typically values between 0 and 1, like the node output values) before training and classification. According to Paola and Schowengerdt (1995) and Qiu and Jensen (2004), ANN-classifiers are robust to noise in the training data and have the ability to generalize; they are error-tolerant and relatively insensitive to background noise. A drawback of neural networks is that they work as a "black box" (Qiu and Jensen 2004), lacking the ability to explain, and thereby further the understanding of, the relationship between inputs and outputs. Because of their opaque internal structure and the element of random variation in the results (due to the randomization of the connection-link weights before training), predicting their performance and interpreting their results are not easy. Another drawback is that iterative training requires much more computation time than parametric methods (Paola and Schowengerdt 1995; Landgrebe 2003).
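The input scaling mentioned above, bringing multi-source layers onto a common 0-1 range before training, can be sketched as follows. The band and elevation values are hypothetical, invented for illustration:

```python
import numpy as np

# Hypothetical multi-source inputs: an 8-bit spectral band (0-255) and an
# elevation layer in metres. The values are made up for illustration.
band = np.array([12.0, 80.0, 255.0, 0.0])
elevation = np.array([150.0, 420.0, 980.0, 305.0])

def minmax_scale(x):
    """Rescale one feature to the common [0, 1] range used for node I/O."""
    return (x - x.min()) / (x.max() - x.min())

# Stack the rescaled layers into one pattern matrix ready for training.
features = np.column_stack([minmax_scale(band), minmax_scale(elevation)])
```

Rescaling each source independently prevents a layer with large physical units (metres of elevation) from dominating layers with small ranges (digital numbers), so every input contributes on comparable terms.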