2. Error Signals. An error signal originates at an output neuron of the network
and propagates backward (layer by layer) through the network. We refer to
the signal as an error signal because its computation by every neuron of the
network involves an error-dependent function in one form or another; one standard form is given below.
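One standard form of that error-dependent function is the local gradient δ_j(n) of neuron j, written here for illustration (e_j denotes the error at an output neuron, φ_j the neuron's nonlinearity, v_j its activation potential, and w_kj the weights connecting neuron j to the neurons k of the next layer; this notation is introduced here and is not defined in the excerpt above):

```latex
\delta_j(n) =
\begin{cases}
  e_j(n)\,\varphi_j'\bigl(v_j(n)\bigr), & \text{neuron } j \text{ in the output layer},\\[4pt]
  \varphi_j'\bigl(v_j(n)\bigr)\sum_{k}\delta_k(n)\,w_{kj}(n), & \text{neuron } j \text{ in a hidden layer.}
\end{cases}
```

Either way the computation is error dependent: directly through e_j(n) at the output layer, and indirectly through the back-propagated local gradients δ_k(n) everywhere else.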
The output neurons constitute the output layer of the network. The remaining neurons
constitute hidden layers of the network. Thus, the hidden units are not part of the
output or input of the network—hence their designation as hidden. The first hidden
layer is fed from the input layer made up of sensory units (source nodes); the resulting
outputs of the first hidden layer are in turn applied to the next hidden layer; and so on
for the rest of the network.
Each hidden or output neuron of an MLP is designed to perform two computations:
1. the computation of the function signal appearing at the output of each neuron,
which is expressed as a continuous nonlinear function of the input signal and
synaptic weights associated with that neuron;
2. the computation of an estimate of the gradient vector (i.e., the gradients of the
error surface with respect to the weights connected to the inputs of a neuron),
which is needed for the backward pass through the network (see the sketch after this list).
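A minimal sketch of these two computations for a single neuron is given below, assuming a logistic nonlinearity; the function names and the NumPy-based formulation are illustrative and not taken from the text.

```python
import numpy as np

def logistic(v):
    """Continuous nonlinear activation function assumed for this sketch."""
    return 1.0 / (1.0 + np.exp(-v))

def function_signal(x, w, b):
    """Computation 1: the function signal y = phi(v) appearing at the neuron's
    output, a continuous nonlinear function of the inputs x and weights w, b."""
    v = np.dot(w, x) + b            # activation potential (induced local field)
    return logistic(v), v

def weight_gradient(x, v, backprop_error):
    """Computation 2: an estimate of the gradient of the error surface with
    respect to the weights feeding this neuron.  `backprop_error` is the
    error-dependent signal arriving during the backward pass (the output
    error for an output neuron, or a weighted sum of downstream local
    gradients for a hidden neuron)."""
    y = logistic(v)
    local_gradient = backprop_error * y * (1.0 - y)     # delta for this neuron
    return local_gradient * np.asarray(x), local_gradient
```

The local gradient returned here is the quantity that would be passed further back to the preceding layer, weighted by the corresponding synaptic weights.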
The hidden neurons act as feature detectors; as such, they play a critical role in
the operation of the MLP. As the learning process progresses across the MLP, the
hidden neurons gradually begin to discover the salient features that characterize the
training data. They do so by performing a nonlinear transformation of the input data
into a new space, called the feature space. In this new space, the classes of interest
in a pattern-classification task, for example, may be more easily separated from one
another than would be the case in the original input data space. Indeed, it is the formation
of this feature space through supervised learning that distinguishes the MLP from
Rosenblatt's perceptron.
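As a concrete, simplified illustration of this idea, the classic XOR problem can be handled by a 2-2-1 network: with suitable weights, the two hidden units map the four input patterns into a feature space in which the two classes become linearly separable. The hand-chosen weights and threshold units below are used only to keep the arithmetic readable; a trained MLP would use continuous nonlinearities and learn comparable weights.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # XOR inputs
d = np.array([0, 1, 1, 0])                                    # desired classes

# Hand-chosen hidden layer: unit 1 responds to "x1 OR x2", unit 2 to "x1 AND x2".
W_hidden = np.array([[1.0, 1.0],
                     [1.0, 1.0]])
b_hidden = np.array([-0.5, -1.5])

def threshold(v):
    return (v > 0).astype(float)

H = threshold(X @ W_hidden.T + b_hidden)   # feature-space image of each input
print(H)                                   # rows: [0,0], [1,0], [1,0], [1,1]
print(threshold(H @ np.array([1.0, -1.0]) - 0.5))   # single linear unit: 0 1 1 0
# In (h1, h2) coordinates the two classes lie on opposite sides of the line
# h1 - h2 = 0.5, so one output neuron suffices -- which is impossible in the
# original (x1, x2) space.
```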
As already mentioned, the BP algorithm is an error-correction learning algorithm,
which proceeds in two phases.
1. In the forward phase, the synaptic weights of the network are fixed, and the
input signal is propagated through the network, layer by layer, until it reaches
the output. Thus, in this phase, changes are confined to activation potentials
and outputs of the neurons in the network.
2. In the backward phase, an error signal is produced by comparing the output of
the network with a desired response. The resulting error signal is propagated
through the network, again layer by layer, but this time the propagation is
performed in the backward direction. In this second phase, successive adjust-
ments are made to the synaptic weights of the network. Calculation of the
adjustments for the output layer is straightforward, but it is more challenging
for the hidden layers; a minimal sketch of both phases follows this list.
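The sketch below puts the two phases together for a small 2-2-1 network with logistic units, trained on the XOR patterns by on-line back propagation. All names (W1, W2, eta, the number of epochs) and the random initialization are illustrative assumptions, and convergence from a given initialization is not guaranteed.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(v):                                    # logistic nonlinearity
    return 1.0 / (1.0 + np.exp(-v))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
D = np.array([0, 1, 1, 0], dtype=float)        # desired responses
W1, b1 = rng.normal(size=(2, 2)), np.zeros(2)  # hidden layer
W2, b2 = rng.normal(size=(1, 2)), np.zeros(1)  # output layer
eta = 0.5                                      # learning-rate parameter

for epoch in range(10000):
    for x, d in zip(X, D):
        # Forward phase: weights held fixed, signals propagate layer by layer;
        # only activation potentials (v) and neuron outputs (y) change.
        v1 = W1 @ x + b1;  y1 = phi(v1)
        v2 = W2 @ y1 + b2; y2 = phi(v2)

        # Backward phase: the error signal propagates backward, layer by layer,
        # and the synaptic weights are adjusted along the way.
        e = d - y2                                # compare output with desired response
        delta2 = e * y2 * (1 - y2)                # output-layer local gradient (direct)
        delta1 = (W2.T @ delta2) * y1 * (1 - y1)  # hidden-layer local gradients
        W2 += eta * np.outer(delta2, y1); b2 += eta * delta2
        W1 += eta * np.outer(delta1, x);  b1 += eta * delta1

print(np.round(phi(W2 @ phi(W1 @ X.T + b1[:, None]) + b2[:, None]), 2))
```

With these settings the network typically converges to outputs near [0, 1, 1, 0]; the point of the sketch is the separation of the two phases, not the particular training outcome.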