Recurrent neural network. The graph contains cycles, so there are feedback
connections in the network.
It is necessary to evolve the neural network by modifying the connection weights
so that its predictions become more accurate. In other words, the weights should
not be fixed by experts. The neural network should be trained by feeding it
teaching patterns and letting it change its weights. This is the learning
process. There are three types of learning methods:
Supervised learning. The network is trained by providing it with input patterns
and their matching output patterns. These output patterns are known as classes.
Unsupervised learning. The output is trained to respond to clusters of patterns
within the input. There is no a priori set of categories into which the patterns
are to be classified.
Reinforcement learning. The learning machine performs some action on the
environment and receives a feedback response from the environment. The learning
system grades its action as rewarded or punished based on the environmental
response and adjusts its weights accordingly. Weight adjustment continues until
the weights no longer change.
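The three paradigms differ mainly in what feedback accompanies each training pattern. A minimal sketch of the data each one consumes (the patterns, class names, and reward function below are made up purely for illustration):

```python
# Supervised learning: each pattern is paired with its known class label.
supervised = [([0.2, 0.7], "class_A"), ([0.9, 0.1], "class_B")]

# Unsupervised learning: patterns only; the network must discover clusters itself.
unsupervised = [[0.2, 0.7], [0.9, 0.1], [0.8, 0.2]]

# Reinforcement learning: after each action the environment returns a scalar
# reward or punishment rather than the correct answer (hypothetical signal).
def environment_feedback(action):
    return 1.0 if action == "good" else -1.0

print(environment_feedback("good"))  # 1.0
```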
Reinforcement learning is an intermediate form between supervised and
unsupervised learning. We apply a neural network to classifying the corpus, and
the supervised learning algorithm used in this chapter is the back-propagation
algorithm.
4.2 Back-Propagation Algorithm for Classification
The back-propagation algorithm [4] is a well-known supervised learning algorithm
for classification, used in feed-forward neural networks. It iteratively
processes the data tuples in the training corpus and compares the network's
prediction for each tuple to the tuple's actual class. Each time a training
tuple is fed in, the weights are modified so as to minimize the mean squared
error between the network's prediction and the actual class. The modifications
are made in the backward direction, from the output layer through each hidden
layer down to the first hidden layer. The back-propagation algorithm includes
the following steps:
- Initializing the weights: the weights are initialized to random real numbers
in the interval [0, 1]. The bias associated with each unit is also initialized.
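This initialization step can be sketched as follows, assuming a single fully connected layer; the function name `init_layer` and its parameters are illustrative, not from the text:

```python
import random

def init_layer(n_inputs, n_units, seed=None):
    """Initialize one fully connected layer: a weight matrix with one row per
    unit, plus one bias per unit. Weights and biases are drawn uniformly from
    [0, 1], as described above."""
    rng = random.Random(seed)
    weights = [[rng.random() for _ in range(n_inputs)] for _ in range(n_units)]
    biases = [rng.random() for _ in range(n_units)]
    return weights, biases

weights, biases = init_layer(n_inputs=3, n_units=2, seed=42)
print(len(weights), len(weights[0]), len(biases))  # 2 3 2
```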
- Propagating the input values forward. A training tuple is fed to the input
layer. Given a unit j, if j is an input unit, its input value, denoted I_j, and
its output value, denoted O_j, are the same:

O_j = I_j

Otherwise, if unit j is a hidden unit or an output unit, its input value I_j is
the weighted sum of the output values of all units in the previous layer. The
bias is also added to this weighted sum:
I_j = Σ_i (w_ij · O_i) + θ_j
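The forward pass for a single unit can be sketched as below, computing I_j as the weighted sum of the previous layer's outputs plus the bias; the function name and the numeric values are illustrative:

```python
def forward_unit(outputs_prev, w_j, theta_j):
    """Net input I_j of a hidden or output unit j: the weighted sum of the
    previous-layer outputs O_i plus the bias, I_j = sum_i(w_ij * O_i) + theta_j."""
    return sum(w * o for w, o in zip(w_j, outputs_prev)) + theta_j

# Input units simply pass their value through unchanged: O_j = I_j.
O_input = [0.5, 1.0, 0.25]
w_j = [0.2, 0.4, 0.8]  # weights w_ij from the three input units to unit j
theta_j = 0.1          # bias of unit j
I_j = forward_unit(O_input, w_j, theta_j)
print(round(I_j, 6))  # 0.8
```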