A new design is produced at each site according to the following equation:
s_i^{k+1} = θ(s_i^k, s_n^k),   X_i^{k+1} = (1 - β) X_i^k + β X_c^k   (12)
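As an illustration, the cell-level update of Eq. (12) can be sketched as below. The function name, the array layout, and the choice of X_c as the fittest design in the cell's neighborhood are assumptions made for this sketch only; the excerpt does not fix these details.

    import numpy as np

    def cell_update(X, fitness, beta, neighbors_of):
        """One sweep of the Eq. (12) update over all cells.

        X            : (n_cells, n_vars) array of current designs
        fitness      : (n_cells,) array of objective values (lower is better, assumed)
        beta         : blending factor in [0, 1]
        neighbors_of : callable mapping a cell index to its neighbor indices
        """
        X_new = np.empty_like(X)
        for i in range(len(X)):
            nbrs = neighbors_of(i)
            # X_c is taken here as the fittest design in cell i's neighborhood
            # (an assumption; the excerpt does not define X_c explicitly)
            c = min(nbrs, key=lambda j: fitness[j])
            X_new[i] = (1.0 - beta) * X[i] + beta * X[c]
        return X_new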
The mutation operator (MO) in the framework of the MCGA is similar to that used in the standard GA. In the real-valued model of mutation, the value of the mutated design variable is replaced by a value randomly selected from the set R_d. It has already been demonstrated that low values of the mutation probability (0.001 to 0.004) are more effective; therefore, the value of 0.004 is also adopted in this chapter.
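A minimal sketch of such a real-valued mutation is given below, assuming each design variable d has a finite set of admissible values stored in allowed_values[d] (a stand-in for R_d); the 0.004 probability follows the text.

    import numpy as np

    def mutate(X, allowed_values, p_m=0.004, rng=None):
        """Replace each design variable, with probability p_m, by a value
        drawn at random from its admissible set (a stand-in for R_d)."""
        rng = np.random.default_rng() if rng is None else rng
        X = X.copy()
        mask = rng.random(X.shape) < p_m          # which entries mutate
        for i, d in zip(*np.nonzero(mask)):
            X[i, d] = rng.choice(allowed_values[d])
        return X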
The MCGA is an elitism-based, multi-stage evolutionary algorithm. In the optimization process of the MCGA, the n individuals of a small, randomly generated initial population are placed on the locations of a 2D grid and the search of the first stage is commenced. As the size of the population is small, the optimization process converges rapidly, and the best solution found in this stage, say X_best^1, is saved. In the next stage, a new elite population is created based on the philosophy of giving the elite individuals a greater chance to survive. In this case, X_best^1 is copied to n̄ randomly selected cells and the remaining cells are selected as follows:

X_j^2 = N(X_best^1, σ X_best^1),   j = 1, 2, ..., (n - n̄)   (13)

where N(X_best^1, σ X_best^1) represents a random number normally distributed with the mean X_best^1 and the variance σ X_best^1. Various values of σ were examined and the best results were obtained with σ = 0.1.

In this case a new optimization process is started. The process of selecting elite populations and carrying out the corresponding optimization stages is continued until the method converges. In fact, in the CGA proposed by Gholizadeh and Salajegheh (2010b) the remaining cells are selected by pure random selection, whereas in the MCGA these cells are sampled from a normal distribution about the elite individual of the previous stage. In this way the MCGA, compared with the CGA, provides better control of the balance between exploration (global investigation of the search space) and exploitation (the fine search around a local optimum). Therefore, the MCGA increases the probability of finding better solutions at a lower computational cost. The flowchart of the proposed MCGA algorithm is shown in Figure 1.
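The construction of the next-stage elite population, Eq. (13), can be sketched as follows. Reading N(X_best^1, σ X_best^1) as a normal distribution whose spread scales with X_best^1 is an interpretation; the function name and array shapes are assumptions.

    import numpy as np

    def next_stage_population(X_best, n, n_bar, sigma=0.1, rng=None):
        """Elite population of the next MCGA stage (sketch of Eq. 13).

        X_best : best design of the previous stage, shape (n_vars,)
        n      : number of grid cells (population size)
        n_bar  : number of cells receiving an exact copy of X_best
        sigma  : spread factor; the text reports sigma = 0.1 as best
        """
        rng = np.random.default_rng() if rng is None else rng
        pop = np.empty((n, X_best.size))
        elite_cells = rng.choice(n, size=n_bar, replace=False)
        pop[elite_cells] = X_best                     # copies of the elite design
        rest = np.setdiff1d(np.arange(n), elite_cells)
        spread = np.abs(sigma * X_best)               # spread scaled by X_best (assumed)
        pop[rest] = rng.normal(X_best, spread, size=(rest.size, X_best.size))
        return pop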
Hybrid Neural Network System
A hybrid neural network system (HNNS) is em-
ployed to efficiently and accurately predict the
nonlinear time history responses of the structures.
In this neural system, a generalized regression neural network (GRNN) and a probabilistic neural network (PNN), both members of the radial basis function (RBF) neural network family, are serially integrated.
GRNN is a memory-based network that pro-
vides estimates of continuous and discrete vari-
ables and converges to the underlying regression
surface. GRNN has a one-pass learning algorithm with a highly parallel structure and does not require an
iterative training procedure. The principal advan-
tages of GRNN are fast learning and convergence
to the optimal regression surface as the number
of samples becomes large. GRNN approximates
any arbitrary function between input and output
vectors, drawing the function estimate directly
from the training data (Specht, 1990). GRNN is
often used for function approximation. It is a two-layer feed-forward network. The first layer of the GRNN consists of RBF neurons with Gaussian activation functions, while the output layer consists of linear neurons. The first layer has as many neurons as there are input-output vectors in the training set. Specifically, the first-layer weight matrix is set to the transpose of the matrix containing the input vectors. The second layer also has as many neurons as input-output vectors, but here the weight matrix is set to the matrix containing the output vectors.
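A compact sketch of such a GRNN, consistent with the two-layer description above, is shown below; the class name, the spread parameter (width of the Gaussian units), and the normalized weighted sum in the output layer are implementation choices for this sketch rather than the chapter's own code.

    import numpy as np

    class GRNN:
        """Minimal GRNN: the first layer memorizes the training inputs as
        Gaussian RBF units, the second (linear) layer memorizes the training
        outputs and returns their kernel-weighted, normalized combination."""

        def __init__(self, spread=0.1):
            self.spread = spread                       # width of the Gaussian units

        def fit(self, X, Y):
            # one-pass "training": store the samples
            self.X = np.asarray(X, dtype=float)        # first-layer weights = inputs
            Y = np.asarray(Y, dtype=float)
            self.Y = Y[:, None] if Y.ndim == 1 else Y  # second-layer weights = outputs
            return self

        def predict(self, Xq):
            Xq = np.atleast_2d(np.asarray(Xq, dtype=float))
            # squared distance from each query point to every stored input
            d2 = ((Xq[:, None, :] - self.X[None, :, :]) ** 2).sum(axis=-1)
            a = np.exp(-d2 / (2.0 * self.spread ** 2))  # Gaussian activations
            # linear output layer: normalized weighted sum of stored outputs
            return a @ self.Y / a.sum(axis=1, keepdims=True)

For instance, GRNN(spread=0.1).fit(X_train, Y_train).predict(X_new) would return predictions for new inputs, where X_train, Y_train, and X_new are hypothetical arrays of training inputs, training outputs, and query points.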