Fig. 12.13 Supervised training of a single neuron (inputs p_1, ..., p_R weighted by w_1, ..., w_R, bias b with unit input, output y compared with the target t to form the error e)
numerous simulation signals, and here too a margin of error exists, e.g. in defining the threshold values for decision making.
The training (learning) of a neural network is to be understood as the iterative adjustment of the neuron weights and bias values so that the network output matches the desired value (in the case of so-called supervised training), or so that the patterns are grouped into suitable clusters to which the desired outputs are assigned. The training pairs of input-output signals have to be prepared in advance (usually by simulation of power system transients) and then used for training. Some additional patterns are used for network testing, in order to confirm the robustness of the trained ANN.
For a clearly defined and not very complex problem, the neuron parameters (weights and bias values) can be determined analytically with little effort. In most cases, however, including even the simple ones, the neuron parameters are adjusted (tuned) by training, in either a supervised or an unsupervised manner.
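As a minimal illustration of an analytically determined parameter set (the example below is an assumption for illustration, not taken from the text): a hard-limit neuron realizing the logical AND of two binary inputs can be parameterized by inspection, since the weighted sum w·p + b must exceed zero only when both inputs are 1.

```python
# Illustrative sketch: parameters chosen analytically, without training.
# With w = [1, 1] and b = -1.5, the weighted sum w.p + b is positive
# only for the input pattern (1, 1), so the neuron computes logical AND.

def hardlim_neuron(p, w=(1.0, 1.0), b=-1.5):
    """Hard-limit neuron: output 1 if w.p + b >= 0, else 0."""
    s = sum(wk * pk for wk, pk in zip(w, p)) + b
    return 1 if s >= 0 else 0

outputs = [hardlim_neuron(p) for p in [(0, 0), (0, 1), (1, 0), (1, 1)]]
# outputs == [0, 0, 0, 1], i.e. the AND truth table
```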
The concept of supervised training is illustrated in Fig. 12.13. The neuron weights are changed iteratively as a function of the error value e, defined as the difference between the desired neuron output t (called the target value) and the actual neuron output y. The initial weights are usually assigned randomly, in the range [-0.5, 0.5], and then updated to obtain an output consistent with the training examples. Neuron tuning is performed by making small adjustments to the weights, driven by the error value, according to the relationship:
w_k(i+1) = w_k(i) + 2η p_k(j) e(i)
b(i+1) = b(i) + 2η e(i),          (12.5)

where i is the iteration number, j the pattern number, and η the learning rate (step size).
The algorithm (12.5) represents the Widrow-Hoff training procedure, an approximate steepest-descent method that aims to minimize the average of the sum of squared errors over all iterations.
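The Widrow-Hoff update rule (12.5) can be sketched as follows; the training pairs, learning-rate value, and epoch count below are illustrative assumptions, not taken from the text:

```python
import random

# Sketch of the Widrow-Hoff (LMS) rule from Eq. (12.5):
#   w_k(i+1) = w_k(i) + 2*eta*p_k(j)*e(i)
#   b(i+1)   = b(i)   + 2*eta*e(i)

def train_neuron(patterns, targets, eta=0.05, epochs=200):
    """Train a single linear neuron y = w.p + b with the LMS rule."""
    r = len(patterns[0])
    random.seed(0)
    # initial weights drawn randomly from [-0.5, 0.5], as in the text
    w = [random.uniform(-0.5, 0.5) for _ in range(r)]
    b = random.uniform(-0.5, 0.5)
    for _ in range(epochs):
        for p, t in zip(patterns, targets):
            y = sum(wk * pk for wk, pk in zip(w, p)) + b  # actual output
            e = t - y                                     # error e = t - y
            w = [wk + 2 * eta * pk * e for wk, pk in zip(w, p)]
            b = b + 2 * eta * e
    return w, b

# Usage with a hypothetical linear mapping t = 2*p1 - p2 + 0.5;
# the trained weights should approach w = [2, -1], b = 0.5.
patterns = [[0, 0], [0, 1], [1, 0], [1, 1], [0.5, 0.5]]
targets = [2 * p1 - p2 + 0.5 for p1, p2 in patterns]
w, b = train_neuron(patterns, targets)
```

Because the target here is exactly linear in the inputs, the squared error can be driven toward zero; for noisy training pairs the rule instead converges toward the minimum of the average squared error.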