The point of best generalisation is determined by the trade-off between the bias and variance associated with the network output and is said to occur where the combination of bias and variance is minimised. In the case of a feedforward CNN, it is possible to reduce both bias and variance simultaneously, using a sequence of ever larger data sets in association with a set of models of ever greater complexity, to improve the generalisation performance of the neural network solution. The generalisation performance that can be achieved is, however, still limited by the intrinsic noise of the data.
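This trade-off can be made concrete with a small simulation (an illustrative sketch, not from the source; the test function, sample sizes and noise level are assumptions): polynomial models of increasing complexity are fitted to repeated noisy samples of a known function, and squared bias and variance are estimated from the spread of the resulting fits. Low-degree models show high bias and low variance, high-degree models the reverse, and the combined error is smallest at an intermediate complexity.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_fn(x):
    # Known underlying function; the network/model only sees noisy samples.
    return np.sin(2 * np.pi * x)

def bias_variance(degree, n_train=30, n_trials=200, noise=0.3):
    """Estimate squared bias and variance of polynomial fits of a given
    degree, averaged over a grid of test points."""
    x_test = np.linspace(0, 1, 50)
    preds = np.empty((n_trials, x_test.size))
    for t in range(n_trials):
        # Fresh noisy training set for each trial.
        x = rng.uniform(0, 1, n_train)
        y = true_fn(x) + rng.normal(0, noise, n_train)
        coeffs = np.polyfit(x, y, degree)
        preds[t] = np.polyval(coeffs, x_test)
    mean_pred = preds.mean(axis=0)
    bias_sq = np.mean((mean_pred - true_fn(x_test)) ** 2)   # systematic error
    variance = np.mean(preds.var(axis=0))                   # sensitivity to sample
    return bias_sq, variance

for d in (1, 4, 9):
    b, v = bias_variance(d)
    print(f"degree {d}: bias^2={b:.3f}  variance={v:.3f}  combined={b + v:.3f}")
```

The printout shows the combined error falling and then rising again as model complexity grows, which is the minimum that defines the point of best generalisation described above.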
13.7 CLASSIFICATION OF COMPUTATIONAL NEURAL NETWORKS
A taxonomic classification of four important families of CNN models (backpropagation networks,
radial basis function [RBF] networks, supervised and unsupervised ART networks and self-organising feature maps) is presented in Figure 13.4. These particular types of CNN would appear to
be the most attractive tools for solving real-world spatial analysis and geographical data modelling
tasks. The classification has two levels: the first division is between networks with and without
directed cycles and the second division is between networks that are trained with and without super-
vision (see Fischer 1998).
13.7.1 Backpropagation CNN
Backpropagation CNNs have emerged as major workhorses in various areas of business and commerce and are the most common type of neural network that has been used in GeoComputation.
These tools can be used as universal function approximators for tasks such as spatial regression,
spatial interaction modelling, spatial site selection, pattern classification in data-rich environments
or space-time series analysis and prediction (Fischer and Gopal 1994; Fischer et al. 1994; Leung
1997; Openshaw 1998; Fischer and Reggiani 2004). In strict terms, however, backpropagation is a
technique that provides an efficient computational procedure for evaluating the derivatives of the
network's performance function with respect to given network parameters and corresponds to a
propagation of errors backward through the network (hence the name). This technique was first
popularised by Rumelhart et al. (1986) and has since been used in countless applications. A brief
introduction to some basic mathematics associated with the backpropagation training algorithm can
be found in Clothiaux and Batchmann (1994).
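This view of backpropagation, as an efficient procedure for evaluating the derivatives of the network's performance function with respect to its parameters, can be sketched for a tiny one-hidden-layer network (a minimal illustration, not the cited authors' implementation; the network size, activation and variable names are invented for the example). The output error is propagated backward through the network via the chain rule, and one analytic derivative is checked against a finite-difference estimate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny feedforward network: 2 inputs -> 3 hidden units (tanh) -> 1 linear output.
W1 = rng.normal(size=(3, 2)); b1 = rng.normal(size=3)
W2 = rng.normal(size=(1, 3)); b2 = rng.normal(size=1)
x = np.array([0.5, -1.2]); target = np.array([0.3])

def forward(W1, b1, W2, b2, x):
    a1 = np.tanh(W1 @ x + b1)        # hidden activations
    y = W2 @ a1 + b2                 # network output
    return a1, y

def loss(W1, b1, W2, b2, x, t):
    # Sum-of-squares performance function E.
    _, y = forward(W1, b1, W2, b2, x)
    return 0.5 * np.sum((y - t) ** 2)

# Backward pass: propagate the output error back through the network.
a1, y = forward(W1, b1, W2, b2, x)
delta2 = y - target                       # dE/dy at the output layer
gW2 = np.outer(delta2, a1); gb2 = delta2
delta1 = (W2.T @ delta2) * (1 - a1**2)    # chain rule through tanh
gW1 = np.outer(delta1, x); gb1 = delta1

# Check one weight derivative against a finite-difference estimate.
eps = 1e-6
Wp = W1.copy(); Wp[0, 0] += eps
num = (loss(Wp, b1, W2, b2, x, target) - loss(W1, b1, W2, b2, x, target)) / eps
print(f"analytic {gW1[0, 0]:.6f}  numeric {num:.6f}")
```

The backward pass computes every derivative with a single forward and backward sweep, whereas the finite-difference check would need one extra forward pass per parameter, which is the efficiency the technique is named for.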
In most cases, backpropagation training is used with multilayered feedforward networks (also
termed multilayered perceptrons) so it has become convenient to refer to this type of supervised
[Figure 13.4 content: a two-way taxonomy of computational neural networks for real-world geographical data analysis and modelling. Classification dimensions: network typology (feedforward vs. feedback) and training (supervised, with examples, vs. unsupervised, with no examples). Feedforward/supervised: backpropagation and radial basis function networks; feedforward/unsupervised: self-organizing map; feedback/supervised: fuzzy ARTMAP; feedback/unsupervised: ART-1 and ART-2.]
FIGURE 13.4 A simple fourfold taxonomic classification of CNNs for geographical data analysis and
modelling. (From Fischer, M.M., Environ. Plann. A , 30(10), 1873, 1998.)