a 1 corresponding to the largest element of the RBF layer neurons and 0's elsewhere. Thus, the network classifies the input vector into a specific class k (k = 1, 2, ..., N_C) because that class has the maximum probability of being correct. The key advantage of the PNN over the other networks is its rapid training. Since the number of layers in the PNN architecture is fixed and all of the synaptic weights are assigned directly from the training samples, training can be completed in only one epoch and no error-correction procedure is necessary. It has been proved that, given enough training data, a PNN is guaranteed to converge to a Bayesian classifier, which usually possesses optimal classification capability (Wasserman, 1993).
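As a concrete illustration of this decision rule, the following is a minimal Python sketch of a PNN (the class name ProbabilisticNN, the Gaussian spread parameter sigma, and its default value are illustrative assumptions, not the chapter's implementation):

```python
import numpy as np

class ProbabilisticNN:
    """Minimal probabilistic neural network (Parzen-window classifier)."""

    def __init__(self, sigma=0.5):
        self.sigma = sigma  # spread of the Gaussian (RBF) pattern units

    def fit(self, X, y):
        # "Training" just stores the samples as pattern-layer units,
        # which is why a PNN needs only a single epoch and no
        # error-correction procedure.
        self.X = np.asarray(X, dtype=float)
        self.y = np.asarray(y)
        self.classes = np.unique(self.y)
        return self

    def predict(self, x):
        # Pattern layer: one Gaussian RBF per stored training sample.
        d2 = np.sum((self.X - np.asarray(x, dtype=float)) ** 2, axis=1)
        act = np.exp(-d2 / (2.0 * self.sigma ** 2))
        # Summation layer: per-class mean activation estimates the
        # class-conditional density.
        pdf = np.array([act[self.y == c].mean() for c in self.classes])
        # Output layer: a 1 for the most probable class, 0 elsewhere,
        # i.e. report the winning class k (k = 1, 2, ..., N_C).
        return self.classes[np.argmax(pdf)]
```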
The first step in training the HNNS is to select a data set. In the sampling process, N_S structures are selected on the basis of their design variable vectors. The natural frequencies (F_i) and nonlinear time history responses (R_i) of all the selected structures are computed by the conventional FEM. In (Gholizadeh et al., 2009) it has been demonstrated that the natural frequencies are the best candidates as inputs of neural networks for predicting the time history responses of structures; in this chapter, too, the natural frequencies are employed as the inputs. During the optimization process, evaluating the frequencies by analytical methods increases the computational effort of the process. To prevent this, a GRNN is trained to predict the natural frequencies. The inputs and outputs of this GRNN, denoted the frequency predictor, are the design variables (X_i) and the natural frequencies (F_i) of the selected structures, respectively. During the nonlinear time history analysis of a structure, it is possible that the structure loses its overall stability and the analysis procedure cannot converge. Thus, before training a neural network to predict the nonlinear responses, it is important to detect stable and unstable structures. For this, classifier neural networks can be employed; in the present chapter, a PNN is trained to accomplish this important task.
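To make the frequency predictor concrete: a GRNN is essentially Nadaraya-Watson kernel regression over the stored training pairs. The sketch below is a plausible minimal version (the class name GRNN and the smoothing parameter sigma are again assumptions, not the chapter's code), mapping a design variable vector X_i to a frequency vector F_i:

```python
import numpy as np

class GRNN:
    """Minimal general regression neural network (kernel regression)."""

    def __init__(self, sigma=0.5):
        self.sigma = sigma  # smoothing parameter of the pattern units

    def fit(self, X, Y):
        # Like the PNN, a GRNN trains in one pass: it only stores data.
        self.X = np.asarray(X, dtype=float)
        self.Y = np.asarray(Y, dtype=float)
        return self

    def predict(self, x):
        d2 = np.sum((self.X - np.asarray(x, dtype=float)) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * self.sigma ** 2))
        # Output is the kernel-weighted average of the stored outputs.
        return w @ self.Y / w.sum()
```

Trained on the sampled pairs (X_i, F_i), such a network stands in for the analytical frequency evaluation inside the optimization loop.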
All of the N_S selected structures are considered in the classification phase. Naturally, considering stable and unstable structures, the number of classes, N_C, is equal to 2. In the training phase of the PNN, the inputs are F_i and the output is 1 for stability and 2 for instability of the corresponding structure. Employing N_S1 stable and N_S2 unstable structures, the PNN is trained to detect stable and unstable structures during the optimization process.
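Continuing the PNN sketch above, the two-class training step might be wired as follows (the frequency values and labels are made-up placeholders for the N_S1 stable and N_S2 unstable samples):

```python
import numpy as np

# Each row stands in for a structure's natural-frequency vector F_i.
F_train = np.array([[2.1, 5.3], [2.0, 5.1],    # stable examples
                    [0.9, 2.2], [1.0, 2.4]])   # unstable examples
labels = np.array([1, 1, 2, 2])                # 1 = stable, 2 = unstable

stability_pnn = ProbabilisticNN(sigma=0.4).fit(F_train, labels)
print(stability_pnn.predict([2.05, 5.2]))      # -> 1 (classified stable)
```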
The last stage in training the HNNS is to train a network to predict the nonlinear time history responses of the N_S1 stable structures. For this purpose, another GRNN, denoted the response predictor, is employed. The inputs and outputs of this GRNN are F_i and R_i of the N_S1 stable structures, respectively.
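Continuing the running example, the response predictor is just a second GRNN restricted to the stable samples (R_stable is a made-up placeholder for the response vectors R_i):

```python
# Train only on the N_S1 stable structures: frequencies in, responses out.
F_stable = F_train[labels == 1]
R_stable = np.array([[0.031, 0.012], [0.034, 0.013]])  # placeholder R_i
response_grnn = GRNN(sigma=0.3).fit(F_stable, R_stable)
```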
In order to evaluate the accuracy of the approximate nonlinear time history responses against their corresponding actual ones (obtained by conventional FE analysis), two evaluation metrics are used:
\mathrm{RRMSE} = \left( \frac{\frac{1}{n_{gp}-1} \sum_{i=1}^{n_{gp}} (z_i - \hat{z}_i)^2}{\frac{1}{n_{gp}} \sum_{i=1}^{n_{gp}} z_i^2} \right)^{1/2}   (16)

R^2 = 1 - \frac{\sum_{i=1}^{n_{gp}} (z_i - \hat{z}_i)^2}{\sum_{i=1}^{n_{gp}} (z_i - \bar{z})^2}   (17)
where z_i and \hat{z}_i are the ith components of the exact and approximate response vectors, respectively, and n_gp is the number of components. The mean value of the components of the exact vector is denoted by \bar{z}.
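In code, Eqs. (16) and (17) translate directly (the function names are mine, not the chapter's):

```python
import numpy as np

def rrmse(z, z_hat):
    """Relative root mean square error, Eq. (16)."""
    z, z_hat = np.asarray(z, dtype=float), np.asarray(z_hat, dtype=float)
    n_gp = z.size  # number of response-vector components
    num = np.sum((z - z_hat) ** 2) / (n_gp - 1)
    den = np.sum(z ** 2) / n_gp
    return np.sqrt(num / den)

def r_square(z, z_hat):
    """Coefficient of determination, Eq. (17)."""
    z, z_hat = np.asarray(z, dtype=float), np.asarray(z_hat, dtype=float)
    return 1.0 - np.sum((z - z_hat) ** 2) / np.sum((z - z.mean()) ** 2)
```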
In the normal mode, when an unseen new sample (X_new) is presented to the trained HNNS during the optimization process, the frequency predictor GRNN first predicts its natural frequencies (F_new). These frequencies are then presented to the PNN to recognize the stability or instability of the structure. If the structure is unstable it is rejected; otherwise, the response predictor GRNN predicts its nonlinear time history responses.
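Putting the three trained networks together, this normal-mode evaluation of a new design can be sketched as follows (again an illustrative composition of the classes defined above, not the chapter's code):

```python
def evaluate_design(x_new, freq_grnn, stability_pnn, response_grnn):
    """HNNS normal mode: frequencies -> stability check -> responses."""
    f_new = freq_grnn.predict(x_new)       # frequency predictor GRNN
    if stability_pnn.predict(f_new) == 2:  # PNN label 2 means unstable
        return None                        # reject the unstable design
    return response_grnn.predict(f_new)    # response predictor GRNN
```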
 