6.5.2.1 Constructive Algorithm NetLS
The NetLS algorithm adds hidden units sequentially until the number
of training errors falls below a user-defined value. The units are trained
with the Minimerror algorithm. First, a linear unit and a spherical unit are
trained with the original training set L M, and the unit that learns with the
smaller number of errors is retained. If all the examples are correctly learned,
the problem is separable and the algorithm stops. Otherwise, the trained
neuron becomes the first hidden unit, h = 1. Then h is incremented, and the
algorithm proceeds as follows:
Algorithm NetLS
1. define a new training set L M,h of couples {x k , y h }, where the new
targets are y h = +1 if the example was correctly classified, and y h = −1
otherwise.
2. train two perceptrons (one linear, one spherical) with the set L M,h. Keep
the perceptron that makes the smaller number of errors as hidden neuron h.
3. connect an output neuron to the set of h hidden units, and train it to assign
the original outputs y k to the internal representations of the corresponding
input patterns x k . If the classification of all the patterns is correct, the
algorithm stops. Otherwise, the output neuron is deleted, and the counter
of hidden units is incremented.
4. go to 1.
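To make the construction concrete, the following minimal sketch implements
the two candidate units of step 2 in Python with NumPy. It is only an
illustration: the Minimerror training rule is replaced here by a crude random
search, and the names unit_out, train_unit and best_of_both are hypothetical,
not part of the published algorithm.

# Illustrative sketch; random search stands in for Minimerror.
import numpy as np

rng = np.random.default_rng(0)

def unit_out(unit, X):
    # Output (+1/-1) of a hidden unit on every row of X.
    kind, a, b = unit
    if kind == "linear":
        s = X @ a + b                              # w . x + b
    else:
        s = b - np.linalg.norm(X - a, axis=1)      # rho - ||x - c|| (spherical)
    return np.where(s >= 0.0, 1, -1)

def train_unit(X, y, kind, n_trials=500):
    # Crude stand-in for Minimerror: random restarts, keep the unit
    # with the fewest training errors.
    best, best_err = None, len(y) + 1
    for _ in range(n_trials):
        if kind == "linear":
            unit = ("linear", rng.standard_normal(X.shape[1]),
                    rng.standard_normal())
        else:
            center = X[rng.integers(len(X))]       # a training point as center
            unit = ("spherical", center, abs(rng.standard_normal()) * X.std())
        err = int(np.sum(unit_out(unit, X) != y))
        if err < best_err:
            best, best_err = unit, err
    return best, best_err

def best_of_both(X, y):
    # Step 2: train one linear and one spherical unit, keep the better one.
    lin, sph = train_unit(X, y, "linear"), train_unit(X, y, "spherical")
    return lin if lin[1] <= sph[1] else sph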
Figure 6.19 shows the algorithm schematically. At iteration t = 1, two
perceptrons are trained with the original training set L M . If an error-free
solution is found, the algorithm stops. Otherwise, the training set is modified.
At t = 2, two new perceptrons are trained, and again, the perceptron that
generates the smallest number of errors is retained. An output neuron is then
connected to the hidden ones, and is trained to discriminate the classes of the
examples, based on their internal representations. If an error-free solution is
reached, the algorithm stops. Otherwise, the output neuron is eliminated and
new targets are defined for each pattern, depending on whether the output
neuron classified it correctly or incorrectly. The process is iterated until all
examples are classified correctly.
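Continuing the sketch above (it reuses the hypothetical helpers unit_out,
train_unit and best_of_both), the outer loop below follows the iteration just
described: relabel the targets, append a hidden unit, and try to separate the
internal representations with a linear output neuron. The XOR run at the end
is only a toy illustration, not a benchmark.

def net_ls(X, y, max_hidden=10):
    # First unit: trained on the original targets; error-free means separable.
    unit, err = best_of_both(X, y)
    hidden = [unit]
    if err == 0:
        return hidden, None
    correct = unit_out(unit, X) == y
    out = None
    while len(hidden) < max_hidden:
        y_h = np.where(correct, 1, -1)             # step 1: new targets
        unit, _ = best_of_both(X, y_h)             # step 2: new hidden unit
        hidden.append(unit)
        # Internal representations: one +/-1 coordinate per hidden unit.
        H = np.column_stack([unit_out(u, X) for u in hidden])
        out, out_err = train_unit(H, y, "linear")  # step 3: output neuron
        if out_err == 0:
            return hidden, out                     # error-free: stop
        correct = unit_out(out, H) == y            # else delete it and relabel
    return hidden, out

# Toy run on XOR, which no single linear unit can separate:
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1, 1, 1, -1])
hidden, out = net_ls(X, y)
print(len(hidden), "hidden units")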
There are several variations of NetLS that improve the computation
time of the algorithm. The interested reader may consult the Ph.D. theses of
Juan Manuel Torres Moreno and Christelle Godin referenced above, where
applications of the algorithm to different problems are described in detail
(see also [Torres Moreno et al. 1998]).
Remark 1. One advantage of constructive algorithms is their low computa-
tional cost: at each stage, only one neuron is trained, and the weights of the
previously trained hidden neurons remain unchanged.