The output for the neuron is determined by using a mathematical function, g(x). Threshold
functions and nonlinear sigmoid functions are commonly used. The output y of a neuron
using the sigmoid function is calculated from the following simple equation:

y = 1 / (1 + e^(-x))        (11.60)
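As a minimal sketch of Equation (11.60) in code, the following Python computes a neuron's sigmoid output from a weighted sum of its inputs. The input values, weights, and bias below are illustrative only and do not come from the text:

```python
import math

def sigmoid(x):
    # Equation (11.60): y = 1 / (1 + e^(-x))
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(inputs, weights, bias=0.0):
    # Weighted sum of the inputs, passed through the sigmoid
    x = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(x)

# Illustrative values only
print(neuron_output([0.5, -1.2, 0.3], [0.8, 0.1, -0.4]))
```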
In biosignal processing applications, the inputs to the first layer or input layer of the
ANN can be raw data, a preprocessed signal, or extracted features from a biosignal. Raw
data are generally samples from a digitized signal. Preprocessed signals are biosignals that
have been transformed, filtered, or processed using some other method before being input
to the neural network. Features can also be extracted from biosignals and used as inputs for
the neural network. Extracted features might include thresholds; a particular, recurring
waveshape; or the period between waveforms.
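As one hedged example of feature extraction, the period between waveforms might be estimated from the times between upward threshold crossings. This sketch assumes the biosignal is already digitized into a Python list sampled at a known rate; the threshold and test signal are made up for illustration:

```python
import math

def periods_between_waveforms(signal, threshold, fs):
    # Sample indices where the signal rises through the threshold
    crossings = [n for n in range(1, len(signal))
                 if signal[n - 1] < threshold <= signal[n]]
    # Convert index differences to periods in seconds
    return [(b - a) / fs for a, b in zip(crossings, crossings[1:])]

# Illustrative: a 2 Hz sine wave sampled at 100 Hz
sig = [math.sin(2 * math.pi * 2 * n / 100) for n in range(300)]
print(periods_between_waveforms(sig, 0.0, fs=100))  # periods near 0.5 s
```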
The ANN must learn to recognize the features or patterns in an input signal, but it cannot
do so initially. For the ANN to learn, a training process must occur in which the user of
the ANN presents the neural network with many different examples of important input.
Each example is given to the ANN many times. Over time, after the ANN has
been presented with all of the input examples several times, the ANN learns to produce
particular outputs for specific inputs.
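The repeated presentation described above is often organized into epochs, one pass over all of the input examples. A brief sketch of that outer loop, where train_step stands in for whatever learning rule the network uses (it is a hypothetical placeholder, not a function from the text):

```python
import random

def train(network, examples, epochs, train_step):
    # One epoch = one presentation of every input example;
    # shuffling varies the order between passes.
    for _ in range(epochs):
        random.shuffle(examples)
        for example in examples:
            train_step(network, example)
```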
There are a variety of learning paradigms for ANNs. Learning can be broadly
divided into two categories: unsupervised learning and supervised learning. In unsupervised
learning, the outputs for the given input examples are not known. The ANN must perform
a sort of self-organization. During unsupervised learning, the ANN learns to recognize
common features in the input examples and produces a specific output for each different
type of input. Types of ANNs with unsupervised learning that have been used in biosignal
processing include the Hopfield network and self-organizing feature map networks.
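To make the self-organization idea concrete, here is a toy sketch in the spirit of a self-organizing feature map: each input is matched to the closest map unit, and that winning unit is nudged toward the input, so similar inputs come to activate the same unit. A full Kohonen map would also update the winner's neighbors; the learning rate and epoch count are made-up values:

```python
import random

def train_som(inputs, n_units, dim, rate=0.1, epochs=50):
    # One randomly initialized weight vector per map unit
    units = [[random.random() for _ in range(dim)] for _ in range(n_units)]
    for _ in range(epochs):
        for x in inputs:
            # Winner: the unit whose weights are closest to the input
            win = min(range(n_units),
                      key=lambda u: sum((w - xi) ** 2
                                        for w, xi in zip(units[u], x)))
            # Nudge the winner's weights toward the input
            units[win] = [w + rate * (xi - w)
                          for w, xi in zip(units[win], x)]
    return units
```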
In supervised learning, the desired output is known for the input examples. The output
that the ANN produces for a particular input or inputs is compared against the desired
output or output function. The desired output is known as the target. The difference between
the target and the output of the ANN is calculated mathematically for each given input
example. A common training method for supervised learning is backpropagation. The
multilayered perceptron trained with backpropagation is a type of network with supervised
learning that has been used for biosignal processing.
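The text says only that the difference between target and output is calculated mathematically; one common concrete choice (an assumption here, not specified by the source) is the sum of squared differences:

```python
def network_error(targets, outputs):
    # Sum of squared differences between targets and ANN outputs
    return sum((t - y) ** 2 for t, y in zip(targets, outputs))

print(network_error([1.0, 0.0], [0.8, 0.2]))  # 0.08
```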
Backpropagation is an algorithm that attempts to minimize the error of the ANN. The
error of the ANN can be regarded as simply the difference between the output of the ANN
for an input example and the target for that same input example. Backpropagation uses a
gradient-descent method to minimize the network error. In other words, the network
error is gradually decreased along an error slope, much as a ball rolls down a hill.
The name backpropagation refers to the way by which the ANN is changed to
minimize the error. Each neuron in the network is “credited” with a portion of the network
error. The relative error for each neuron is then determined, and the connection strengths
between the neurons are changed to minimize the errors. The weights, such as those that
were shown in Figure 11.40, represent the connection strengths between neurons. The
calculations of the neuron errors and weight changes propagate backward through the ANN
from the output neurons to the input neurons. Backpropagation is the method of finding
the optimum weight values that produce the smallest network error.
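Putting the pieces together, the following is a compact sketch of backpropagation for a small multilayered perceptron with one hidden layer and a single sigmoid output neuron. The network size, learning rate, epoch count, and XOR training set are illustrative assumptions rather than details from the text, and a random start may occasionally need more epochs to converge:

```python
import math, random

def sigmoid(x):
    # Equation (11.60)
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, w_hid, w_out):
    # Forward pass through one hidden layer and one output neuron;
    # each weight list carries a trailing bias weight.
    h = [sigmoid(sum(w * v for w, v in zip(wh, x + [1.0]))) for wh in w_hid]
    y = sigmoid(sum(w * v for w, v in zip(w_out, h + [1.0])))
    return h, y

def train(examples, n_in, n_hid, rate=0.5, epochs=5000):
    w_hid = [[random.uniform(-1, 1) for _ in range(n_in + 1)]
             for _ in range(n_hid)]
    w_out = [random.uniform(-1, 1) for _ in range(n_hid + 1)]
    for _ in range(epochs):
        for x, target in examples:
            h, y = forward(x, w_hid, w_out)
            # Output neuron's share of the error
            d_out = (target - y) * y * (1.0 - y)
            # Credit each hidden neuron with a portion of the error
            d_hid = [d_out * w_out[j] * h[j] * (1.0 - h[j])
                     for j in range(n_hid)]
            # Gradient-descent weight updates, output layer then hidden
            for j, v in enumerate(h + [1.0]):
                w_out[j] += rate * d_out * v
            for j in range(n_hid):
                for i, v in enumerate(x + [1.0]):
                    w_hid[j][i] += rate * d_hid[j] * v
    return w_hid, w_out

# Illustrative use: the XOR problem (not from the text)
xor = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
       ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]
w_hid, w_out = train(xor, n_in=2, n_hid=3)
for x, t in xor:
    print(x, "->", round(forward(x, w_hid, w_out)[1], 2), "target", t)
```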