parameters in cellular automata urban growth models (Li and Yeh 2001, 2002; Wu 2002; Almeida
et al. 2008; Tayyebi et al. 2011).
13.3 HOW DO COMPUTATIONAL NEURAL NETWORKS WORK?
CNNs are not mysterious devices; these modern tools are in fact just simple (usually non-linear)
adaptive information processing structures. In mathematical terms, a CNN can be defined as a
directed graph which has the following properties (Fischer 1998):
1. A state level u_i is associated with each node i.
2. A real-valued weight w_ij is associated with each edge (i, j) between two nodes i and j that specifies the strength of this link.
3. A real-valued bias θ_i is associated with each node i.
4. A (usually non-linear) transfer function ϕ_i[u_j, w_ij, θ_i, (j, i)] is defined for each node i which determines the state of that node as a function of its bias, the weights on its incoming links from other nodes and the states of the j nodes that are connected to it via these links.
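The four properties above can be illustrated for a single node. The following is a minimal sketch, not taken from the source: it assumes a logistic (sigmoid) transfer function, whereas the text leaves ϕ_i unspecified beyond "usually non-linear".

```python
import math

def node_state(incoming_states, weights, bias):
    """State u_i of one processing element: a non-linear transfer
    function applied to the weighted sum of the states of the
    incoming nodes j, plus the node's bias theta_i.

    The logistic transfer function used here is an assumption;
    any (usually non-linear) transfer function could be substituted.
    """
    net = sum(w * u for w, u in zip(weights, incoming_states)) + bias
    return 1.0 / (1.0 + math.exp(-net))  # logistic (sigmoid) transfer
```

With no incoming links and a zero bias, the net input is 0 and the logistic transfer returns 0.5; the weights are the quantities that a training procedure would adjust.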
There are a number of standard terms in use. The nodes are referred to as PEs or processing units. The edges of the network are called connections. Each connection functions as a unidirectional conduction path (for signal or data flows) and transmits its information in a predetermined direction. Each PE can have numerous incoming connections, called input connections, and there is no upper limit on their number; nor is there any restriction on the number of output connections. Each output connection carries an identical output signal, which is the state, or activation level, of that PE. The weights are termed connection parameters, and it is these items that are altered during the training process and which in turn determine the overall behaviour of the CNN model.
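Because each PE's single output signal fans out identically to the PEs of the next layer, the behaviour of the whole network amounts to repeated application of the per-node state update, layer by layer. The sketch below illustrates this for a small feedforward network; the weight values are hypothetical placeholders standing in for the connection parameters that training would adjust, and the logistic transfer function is again an assumption.

```python
import math

def sigmoid(net):
    """Assumed (logistic) transfer function for every PE."""
    return 1.0 / (1.0 + math.exp(-net))

def forward(x, layers):
    """Propagate an input vector through successive layers of PEs.

    Each layer is a pair (W, b): W[i] holds the weights on the
    incoming connections of PE i, and b[i] is its bias. Every PE's
    single output fans out as an identical input signal to all PEs
    in the next layer.
    """
    for W, b in layers:
        x = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + bi)
             for row, bi in zip(W, b)]
    return x

# Hypothetical connection parameters for a 2-2-1 network
hidden = ([[0.4, -0.2], [0.3, 0.8]], [0.0, -0.1])
output = ([[1.0, -1.0]], [0.2])
response = forward([1.0, 0.5], [hidden, output])
```

Each pass through the loop implements one layer of the feedforward architecture described above; training would modify the entries of W and b, and thereby the network's overall behaviour.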
A typical CNN architecture is shown in Figure 13.1. Circles are used to denote the PEs, which
are all linked with weighted connections, to form a network. The connections have arrows on them
which indicate the direction of the signal flow. The single output signal from each PE branches, or
fans out, and identical copies of the same signal are either distributed to other PEs or leave the net-
work altogether. The input that is presented to the network from the external world can be viewed
[Figure 13.1 appears here: a feedforward network comprising an input layer, a hidden layer and an output layer; input signals enter on the left and the output response leaves on the right.]
FIGURE 13.1 Basic configuration of a feedforward multilayered perceptron. (From Fischer, M.M. and Abrahart, R.J., Neurocomputing: Tools for geographers, in GeoComputation, eds. Openshaw, S. and Abrahart, R.J., pp. 192–221, Taylor & Francis, London, U.K., 2000.)