However, once the network is trained, classification proceeds rapidly (Pal and Pal 1993). Despite the high cost of training (Lillesand et al. 2008), neural networks have no firm rules for network design, and their performance is influenced by several issues (e.g., the network architecture) that depend on the analyst (Foody and Arora 1997).
Classification can be improved by using hierarchical NN-classifiers and combining the results of multiple classifiers through a compromise rule (Lee and Ersoy 2007). It is established that using a collection of neural networks for LULC classification of multispectral remotely sensed data can yield a significant increase in classification accuracy (Canty 2009).
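To make the ensemble idea concrete, the sketch below trains a small collection of networks and combines their per-pixel classifications by a simple majority vote. This is only an illustration of vote-based combination, not the specific compromise rule of Lee and Ersoy (2007); it assumes scikit-learn is available, and the data arrays (pixels, labels) are synthetic placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical training data: rows are pixel vectors (one value per
# spectral band), labels are integer LULC class codes.
rng = np.random.default_rng(0)
pixels = rng.random((500, 6))      # 500 pixels, 6 bands
labels = rng.integers(0, 4, 500)   # 4 LULC classes

# Train a small collection of networks that differ in architecture and
# initialization, so that their errors are partly uncorrelated.
ensemble = [
    MLPClassifier(hidden_layer_sizes=h, max_iter=1000, random_state=s)
    for h, s in [((16,), 1), ((32,), 2), ((16, 8), 3)]
]
for net in ensemble:
    net.fit(pixels, labels)

# Combine the individual classifications by majority vote per pixel.
votes = np.stack([net.predict(pixels) for net in ensemble])
combined = np.apply_along_axis(
    lambda v: np.bincount(v, minlength=4).argmax(), 0, votes)
```

Because the member networks differ, their individual errors tend to fall on different pixels, so the voted result is usually at least as accurate as the best single member.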
Successful classification of remotely sensed data based on different choices of ANN architecture has been reported in many studies (Bagan et al. 2008), e.g., the Multi-Layer Perceptron (MLP) (Benediktsson et al. 1990; Arora and Mathur 2001); ARTMAP (Carpenter et al. 1997; Alilat et al. 2006); radial basis function networks (Bruzzone and Fernandez-Prieto 1999); and the SOM algorithm with Learning Vector Quantization (LVQ) (Ito and Omatu 1999; Ji 2000).
ARTMAP systems, particularly ART2 and fuzzy-ART, are practical for unsupervised classification of remotely sensed imagery (Tso and Mather 2009). An example of applying fuzzy-ARTMAP was presented by Carpenter et al. (1997), where the results were compared to those produced by the MLC, nearest-neighbor, and multilayer perceptron approaches, confirming that fuzzy-ARTMAP is faster and more stable. The same conclusion was reached by Mannan et al. (1998).
Liu et al. (2004) presented an ARTMAP-based model called ART Mixture MAP (ART-MMAP) for estimating LULC fractions within a pixel. Finally, in order to obtain good results, one may have to experiment with a variety of ART model parameters (Tso and Mather 2009).
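As a concrete illustration of that parameter sensitivity, the sketch below implements the core of single-pass fuzzy ART (complement coding, choice function, vigilance test, fast learning) in plain NumPy and shows how the vigilance value alone changes the number of categories that emerge. The data and parameter values are illustrative, and this simplified routine stands in for, rather than reproduces, the ARTMAP variants cited above.

```python
import numpy as np

def fuzzy_art(data, rho=0.75, alpha=0.001, beta=1.0):
    """Cluster rows of `data` (scaled to [0, 1]) with single-pass fuzzy ART.

    rho   -- vigilance; raising it yields more, tighter categories
    alpha -- choice parameter
    beta  -- learning rate (1.0 gives "fast learning")
    """
    coded = np.hstack([data, 1.0 - data])   # complement coding
    weights = []                            # one vector per committed category
    labels = []

    for x in coded:
        order = []
        if weights:
            w = np.array(weights)
            match = np.minimum(x, w).sum(axis=1)       # |x ^ w_j|
            choice = match / (alpha + w.sum(axis=1))   # choice function T_j
            order = np.argsort(-choice)                # best candidate first
        for j in order:
            # Vigilance test: the winning category must match x closely enough.
            if np.minimum(x, weights[j]).sum() / x.sum() >= rho:
                weights[j] = (beta * np.minimum(x, weights[j])
                              + (1 - beta) * weights[j])
                labels.append(int(j))
                break
        else:
            weights.append(x.copy())        # all candidates reset: new category
            labels.append(len(weights) - 1)
    return np.asarray(labels), np.asarray(weights)

# The vigilance parameter alone changes how many clusters emerge:
rng = np.random.default_rng(0)
pixels = rng.random((200, 4))
for rho in (0.5, 0.7, 0.9):
    labels, w = fuzzy_art(pixels, rho=rho)
    print(rho, len(w), "categories")
```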
The most common NN-classifier in remote sensing is the MLP (the multi-layered feed-forward network) (Tso and Mather 2009). Excellent reviews of experiments using the MLP are presented by Paola and Schowengerdt (1995), Atkinson and Tatnall (1997), and Kanellopoulos and Wilkinson (1997). The MLP employs the "generalized delta rule". "At the first stage of
training a back-propagation network, the training sample vectors (with known
classes/target outputs) are used as input for the network and propagated forward to
calculate the output values for each output node. The error between the real and
preferred output is calculated. In the case where each output node represents one
class, the preferred output is a high value (e.g., 0.9) for the node of the correct
class, and a low value (e.g., 0.1) for the other nodes. The second training stage
features a backward pass from the output nodes through the network, during which
the weights are changed according to the learning rate and the error signal passed
backwards to each node" (Benediktsson et al. 1990). This process of inputting the
training data (Fig. 5.38), estimating the output error and modifying the weights of the connection links is repeated many times (Foody 2004), until some stopping condition is satisfied, ideally until the network has stabilized so that the changes in error and weight per cycle have become very small (iterative training). Once the network is trained, i.e., suitable weights have been found, all the pixel vectors are fed into the network and classified.
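The two training stages described above can be illustrated with a minimal single-hidden-layer back-propagation loop in NumPy, using the 0.9/0.1 target coding from the quoted passage. The layer sizes, learning rate, and stopping threshold are arbitrary illustrative choices, and the synthetic arrays merely stand in for real training pixel vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical training set: pixel vectors with known classes.
X = rng.random((300, 6))        # 300 pixels, 6 spectral bands
y = rng.integers(0, 4, 300)     # 4 LULC classes

# Target coding as in the quoted scheme: 0.9 for the node of the
# correct class, 0.1 for all other nodes.
T = np.full((len(y), 4), 0.1)
T[np.arange(len(y)), y] = 0.9

# One hidden layer; weights drawn small at random.
W1 = rng.normal(scale=0.5, size=(6, 12))
W2 = rng.normal(scale=0.5, size=(12, 4))
lr, prev_err = 0.1, np.inf

for epoch in range(10000):
    # Stage 1: forward pass and output error.
    H = sigmoid(X @ W1)         # hidden activations
    O = sigmoid(H @ W2)         # output activations
    err = ((T - O) ** 2).mean()

    # Stage 2: backward pass; the generalized delta rule passes the
    # error signal back through the sigmoid derivatives.
    dO = (O - T) * O * (1 - O)
    dH = (dO @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ dO
    W1 -= lr * X.T @ dH

    # Iterative training: stop once the error change per cycle is tiny.
    if abs(prev_err - err) < 1e-9:
        break
    prev_err = err

# Once trained, every pixel vector is classified by a forward pass.
predicted = sigmoid(sigmoid(X @ W1) @ W2).argmax(axis=1)
```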