sufficiently regular) function. The necessary condition is a sufficiently high number
of neurons in the hidden layer.
NNs learn by example: given a training data set of inputs with a corresponding set of target outputs, the network's weights and biases are adjusted so as to minimize the errors in its predictions on the training data. The best-known training algorithm is back-propagation; a very effective variant is the Levenberg-Marquardt algorithm, as outlined in [2].
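As a minimal sketch of the Levenberg-Marquardt idea, the fragment below fits a hypothetical two-parameter model y = a·exp(b·x) by damped Gauss-Newton steps; the model, the damping schedule, and the synthetic data are illustrative assumptions, not the reference cited above.

```python
import numpy as np

def levenberg_marquardt(x, y, w, n_iter=50, mu=1e-3):
    """Fit the illustrative model y = w[0] * exp(w[1] * x) by
    Levenberg-Marquardt: Gauss-Newton steps damped by mu."""
    for _ in range(n_iter):
        pred = w[0] * np.exp(w[1] * x)
        r = y - pred                      # residuals
        # Jacobian of the residuals with respect to the parameters
        J = np.column_stack([-np.exp(w[1] * x),
                             -w[0] * x * np.exp(w[1] * x)])
        # Damped normal equations: (J^T J + mu I) dw = -J^T r
        A = J.T @ J + mu * np.eye(len(w))
        dw = np.linalg.solve(A, -J.T @ r)
        new_w = w + dw
        new_r = y - new_w[0] * np.exp(new_w[1] * x)
        if new_r @ new_r < r @ r:         # step improves: accept, relax damping
            w, mu = new_w, mu * 0.5
        else:                             # step worsens: reject, increase damping
            mu *= 2.0
    return w

# Noise-free synthetic data generated with a = 2.0, b = -1.5
x = np.linspace(0.0, 2.0, 20)
y = 2.0 * np.exp(-1.5 * x)
w = levenberg_marquardt(x, y, np.array([1.0, -1.0]))
```

The damping parameter mu interpolates between Gauss-Newton (mu small, fast near the optimum) and gradient descent (mu large, robust far from it), which is what makes the method effective for NN training.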
The weights of an NN are usually initialized to small random values before training. However, the Nguyen-Widrow initialization technique greatly reduces the training time (see [11]).
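A sketch of the Nguyen-Widrow scheme is given below, assuming inputs normalized to [-1, 1] and the commonly used scale factor 0.7·H^(1/n); the function name and defaults are illustrative.

```python
import numpy as np

def nguyen_widrow_init(n_in, n_hidden, seed=0):
    """Nguyen-Widrow initialization for a sigmoid hidden layer with
    inputs normalized to [-1, 1]: random weights are rescaled so that
    each neuron's active region covers a slice of the input space."""
    rng = np.random.default_rng(seed)
    # Scale factor beta = 0.7 * H^(1/n), H hidden neurons, n inputs
    beta = 0.7 * n_hidden ** (1.0 / n_in)
    W = rng.uniform(-1.0, 1.0, size=(n_hidden, n_in))
    # Rescale each weight vector to have norm beta
    W *= beta / np.linalg.norm(W, axis=1, keepdims=True)
    # Biases spread the active regions uniformly across [-beta, beta]
    b = beta * np.linspace(-1.0, 1.0, n_hidden)
    return W, b

W, b = nguyen_widrow_init(n_in=2, n_hidden=5)
```

Compared with purely random small weights, this spreads the neurons' sigmoid transition regions evenly over the input domain, so fewer neurons start out redundant and training converges faster.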
In this work, we use a classical feed-forward NN with one hidden layer: this layer has a sigmoid transfer function, while the output layer has a linear transfer function. The back-propagation training algorithm is used; in particular, the Levenberg-Marquardt algorithm is implemented, with a fixed number of iterations. The Nguyen-Widrow initialization technique is also implemented. The training data are internally normalized, both in input and in output, in order to exploit the natural range of definition of the transfer functions.
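The network architecture just described can be sketched as follows, assuming tanh as the sigmoid transfer function and a simple min-max normalization to [-1, 1]; both choices are plausible assumptions, not details stated in the text.

```python
import numpy as np

def minmax_normalize(v, lo, hi):
    """Map values from [lo, hi] into [-1, 1], the natural range of a
    tanh-like sigmoid transfer function (assumed normalization scheme)."""
    return 2.0 * (v - lo) / (hi - lo) - 1.0

def forward(x, W1, b1, W2, b2):
    """Feed-forward NN with one hidden layer:
    sigmoid (tanh) hidden transfer, linear output transfer."""
    h = np.tanh(W1 @ x + b1)   # hidden layer: sigmoid transfer function
    return W2 @ h + b2         # output layer: linear transfer function

# Example: 2 inputs, 3 hidden neurons, 1 output
x = minmax_normalize(np.array([5.0, 2.5]), 0.0, 10.0)
W1, b1 = np.zeros((3, 2)), np.zeros(3)
W2, b2 = np.zeros((1, 3)), np.array([1.0])
y = forward(x, W1, b1, W2, b2)
```

Normalizing inputs and outputs keeps the pre-activations in the region where the sigmoid is not saturated, which is exactly the "natural range of definition" the text refers to.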
In this context, there is only one free parameter to be set: the number of neurons in the hidden layer. Automatic network sizing has been implemented, using the value proposed in [16].
4.4.5
Kriging
Kriging is a very popular regression methodology based on Gaussian processes [14]. This RSM algorithm can be interpolating or approximating, depending on whether a noise parameter is set to zero or to a nonzero value.
Kriging is a Bayesian methodology (named after Professor Daniel Krige), widely employed as a prediction tool in geostatistics, e.g., for soil permeability, oil and other mineral extraction, etc. (originally it was developed for predicting gold concentrations at extraction sites). The formalization and dissemination of this methodology are due to Professor Georges Matheron [8], who referred to Krige's regression technique as krigeage.
The Kriging estimator is a linear estimator, i.e., the estimated value is expressed
as a linear combination of the training values, in other words:
\hat{y}(x) = \sum_{k=1}^{N} \lambda_k(x)\, y_k \qquad (4.17)
where the weights \lambda_1, \ldots, \lambda_N are obviously point-dependent.
Kriging can also produce an estimate of the error, i.e., a prediction value and an
expected deviation from the prediction.
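A minimal sketch of such a predictor is given below, assuming simple Kriging with zero mean, unit process variance, and a squared-exponential correlation model; the kernel choice and the length-scale parameter are illustrative assumptions.

```python
import numpy as np

def kriging_predict(X_train, y_train, X_new, length=1.0, noise=0.0):
    """Simple Kriging (zero mean, unit variance, squared-exponential
    correlation). Returns the mean prediction and its expected standard
    deviation at each new point. noise=0 gives an interpolating model,
    noise>0 an approximating one."""
    def corr(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length**2)

    K = corr(X_train, X_train) + noise * np.eye(len(X_train))
    k = corr(X_train, X_new)
    K_inv_k = np.linalg.solve(K, k)
    # Linear estimator of Eq. (4.17): y_hat(x) = sum_k lambda_k(x) y_k,
    # with point-dependent weights lambda(x) = K^{-1} k(x)
    mean = K_inv_k.T @ y_train
    var = 1.0 - np.sum(k * K_inv_k, axis=0)
    return mean, np.sqrt(np.maximum(var, 0.0))

X = np.array([[0.0], [1.0], [2.0]])
y = np.sin(X).ravel()
mean, std = kriging_predict(X, y, X)  # noise=0: interpolates the data
```

With noise set to zero the predictor reproduces the training values exactly and reports zero expected deviation there, while between the training points the standard deviation grows, quantifying the prediction uncertainty.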