In this chapter, artificial intelligence (AI) techniques such as neural networks (NN) and genetic algorithms (GA) are examined to optimise the interpolation methods and the creation of a DEM from the sample points. Finally, the heights estimated by the intelligent techniques are compared with those obtained from the conventional interpolation methods.
2 Artificial Neural Networks
Artificial Neural Networks (ANN) are modelled on the structure and functionality of the human brain, so that their behaviour can be interpreted in terms of human conduct. Investigations show that such networks have the ability of learning, remembering, forgetting, drawing conclusions, pattern recognition, classification of information and many other brain functions (Hertz et al. 1991). A NN is essentially made up of simple processing units called neurons (Foody et al. 1995). ANN structures are organised in layers, consisting of an input layer, an output layer and one or more intermediate layers. Each layer contains several neurons, which are connected across layers by links carrying different weights. Based on how the nodes are connected to each other, NN are divided into two groups: feed-forward NN and feedback NN. In a feed-forward network, the input must pass through the neurons in one direction to produce the output. The best-known feed-forward NN is the perceptron, which is one of the most important and widely used models for diagnosis and classification (Picton 2000). Perceptrons can be single-layered or multi-layered; the difference between them is that a multi-layer perceptron has one or more hidden layers between the input and the output layer. The task of these hidden layers is to extract the non-linear relationships from the input presented to them.
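To make the layered structure concrete, the following is a minimal sketch of a one-hidden-layer perceptron in Python with NumPy. It is illustrative only: the layer sizes, the sigmoid activation and the random weight initialisation are assumptions, not details taken from the study; the two inputs and single output merely suggest mapping planimetric coordinates to a height.

```python
import numpy as np

# Minimal multi-layer perceptron: input layer -> one hidden layer -> output layer.
# Layer sizes are illustrative, e.g. 2 inputs (x, y coordinates), 1 output (height).
rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 2, 8, 1

# Weighted connections between layers (plus bias terms).
W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))   # input  -> hidden
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, n_out))  # hidden -> output
b2 = np.zeros(n_out)

def sigmoid(z):
    """Non-linear activation; it is what lets hidden layers capture non-linear relations."""
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """Feed-forward pass: the input flows one way, from input layer to output layer."""
    h = sigmoid(x @ W1 + b1)    # hidden-layer activations
    return h @ W2 + b2          # linear output (e.g. an interpolated height)

# Example: one sample with two input features.
print(forward(np.array([0.3, 0.7])))
```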
The two main steps in the application of a NN are learning and recall. The aim of NN learning is to find the optimal weights of the neuron connections, which is achieved by a recursive method (Mokhtarzade and Valadan Zoej 2007). Generally, the error back-propagation learning rule is used to train the multi-layer perceptron NN. Error back-propagation is composed of two main routes. The first route is the forward (way-in) path, in which the input vector is applied to the Multi-Layer Perceptron (MLP) network and propagates through the intermediate layers to the output layer; the output vector of the output layer is the actual response of the MLP network. Along this path the network parameters remain fixed and unchanged. The second route is the backward (come-back) path. In the backward path, unlike the forward path, the MLP network parameters can be changed and adjusted. This adjustment is consistent with the error-correcting rule: an error signal is formed at the output layer of the network, where the error vector equals the difference between the desired response and the actual response of the network. In the backward path, the calculated error values are distributed over the entire network through its layers. In this repetitive process, weight corrections are calculated and added to the previous weights, modifying the network for the next iteration (Wiszniewski 1983).
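To illustrate the two routes, the sketch below adds one back-propagation update for the MLP defined in the previous sketch (it must be run together with that code). The squared-error loss and the learning rate are assumptions for illustration; the source does not specify them.

```python
def backprop_step(x, d, lr=0.1):
    """One iteration of error back-propagation for the MLP above.

    x : input vector, d : desired response, lr : assumed learning rate.
    """
    global W1, b1, W2, b2

    # Forward (way-in) path: parameters stay fixed; compute the actual response.
    h = sigmoid(x @ W1 + b1)
    y = h @ W2 + b2

    # Error vector: desired response minus actual response of the network.
    e = d - y

    # Backward (come-back) path: distribute the error back through the layers.
    delta_out = e                                   # output layer (linear units)
    delta_hid = (delta_out @ W2.T) * h * (1 - h)    # hidden layer (sigmoid derivative)

    # Corrections are added to the previous weights (error-correcting rule).
    W2 += lr * np.outer(h, delta_out)
    b2 += lr * delta_out
    W1 += lr * np.outer(x, delta_hid)
    b1 += lr * delta_hid

    return float(e @ e)  # squared error, for monitoring the repetitive process

# Repetitive learning process on a single illustrative sample.
for _ in range(100):
    err = backprop_step(np.array([0.3, 0.7]), np.array([1.0]))
```

In this sketch the error signal formed at the output layer is propagated backwards layer by layer, and each weight correction is simply added to the previous weight, matching the repetitive adjustment described above.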