In this swarm approach, the evolutionary procedure is used to evolve feedforward
neural networks with a characteristic transfer function. However, this is not an
inherent constraint. In fact, we have placed only minimal constraints on the type
of artificial neural networks that may be evolved: the feedforward ANNs do not
have to be strictly layered or fully connected between adjacent layers, and they
may contain hidden nodes with different transfer functions. Let us examine how
this approach represents the particles and evaluates the fitness of each particle.
3.3.2.1. Particle representation
To represent the particles, we must fix a priori certain protocol parameters,
such as the maximum number of hidden layers, denoted $L_{max}$, and the maximum
number of nodes in a hidden layer, denoted $N_{max}$. Based on these values, a
particle can be represented as shown in Fig. 3.2.
The first attribute $P_{i1}$ of the particle represents the number of hidden
layers in the architecture; its value lies between 0 and $L_{max}$. The features
$P_{i2}$ to $P_{i(L_{max}+1)}$ give the number of neurons in the respective
hidden layers. The subsequent features store the weights between the input layer
and the first hidden layer, and so on, except for the last feature of the
particle, $P_{ib}$, which stores the weight values of the bias units. Figure 3.3
shows the mapping of architecture and weights to a particle.
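To make the encoding concrete, here is a minimal sketch in Python; the bounds,
the weight range, and the helper name random_particle are all illustrative
assumptions, since the text does not fix an implementation:

```python
import numpy as np

L_max, N_max = 3, 5      # assumed bounds on hidden layers and nodes per layer
n_in, n_out = 4, 3       # assumed input and output dimensions

def random_particle(rng: np.random.Generator) -> np.ndarray:
    """Encode one particle as [P_i1, P_i2..P_i(L_max+1), weights..., P_ib]."""
    n_layers = rng.integers(0, L_max + 1)            # P_i1 in [0, L_max]
    nodes = rng.integers(1, N_max + 1, size=L_max)   # P_i2 .. P_i(L_max+1)
    nodes[n_layers:] = 0                             # slots beyond P_i1 unused
    # Reserve weight slots for the largest admissible architecture so the
    # particle has a fixed length; draw values uniformly from a small range.
    sizes = [n_in] + [N_max] * L_max + [n_out]
    n_weights = sum(a * b for a, b in zip(sizes[:-1], sizes[1:]))
    weights = rng.uniform(-0.5, 0.5, size=n_weights)
    bias = rng.uniform(-0.5, 0.5, size=sum(sizes[1:]))  # P_ib: bias weights
    return np.concatenate(([n_layers], nodes, weights, bias))
```

Fixing the particle length at the maximum admissible architecture is one common
design choice; it lets standard vector-based PSO updates apply unchanged, with
the unused slots simply ignored when the network is decoded.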
3.3.2.2. Fitness evaluation
The fitness of each particle in the proposed approach is determined solely by
the misclassification rate computed from the confusion matrix.
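As a sketch, the misclassification rate can be read off a confusion matrix as
follows (assuming a square matrix with true classes on the rows and predicted
classes on the columns, a convention the text does not state):

```python
import numpy as np

def fitness(cm: np.ndarray) -> float:
    """Misclassification rate: 1 minus the fraction on the diagonal."""
    return 1.0 - np.trace(cm) / cm.sum()
```

For example, fitness(np.array([[50, 2], [5, 43]])) gives 0.07, since 7 of the
100 instances lie off the diagonal.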
The complete set of instructions for the proposed approach is as follows:
(1) Generate an initial swarm of N particles at random. The number of hidden
layers and the respective numbers of nodes are generated at random within
their allowed ranges, and the set of weights is distributed uniformly within
a small range (see the sketch after this step).
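Continuing the hypothetical encoding from Section 3.3.2.1, step (1) might read
(the swarm size N and the seed are illustrative):

```python
rng = np.random.default_rng(0)  # seed chosen arbitrarily
N = 30                          # assumed swarm size
swarm = [random_particle(rng) for _ in range(N)]
```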
Fig. 3.2. A typical instance of a particle.