values of θ, say θ = 0° and 90°, can of course be selected and employed in the training set; thus the training set could consist of, for example, fatigue data from R = 0.1: θ = 0° and R = 10: θ = 90°. Using this selection, the training data will be based upon two different values of R and θ. It becomes clear that the multivariable and multiaxial aspects of fatigue life assessment are emphasized.
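As a minimal sketch of such a training-set selection, assuming the fatigue data sit in a NumPy structured array (the field names R, theta, and life and all numeric values are hypothetical, not taken from the study):

```python
import numpy as np

# Hypothetical fatigue dataset: stress ratio R, loading angle theta (deg),
# and observed fatigue life in cycles. Field names and values are
# illustrative placeholders only.
data = np.array(
    [(0.1, 0.0, 1.2e5), (0.1, 90.0, 8.0e4),
     (10.0, 0.0, 2.5e5), (10.0, 90.0, 1.1e5)],
    dtype=[("R", float), ("theta", float), ("life", float)],
)

# Training set built from two (R, theta) combinations, e.g.
# R = 0.1 with theta = 0 deg and R = 10 with theta = 90 deg.
mask = ((data["R"] == 0.1) & (data["theta"] == 0.0)) | \
       ((data["R"] == 10.0) & (data["theta"] == 90.0))
train = data[mask]
test = data[~mask]  # the remaining combinations are left for testing
```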
Thirdly, the discrepancies observed between the fatigue lives predicted by the NN-NARX model and the experimental data may be reduced by a different selection of training fatigue data. It is hoped that, with different training data, the NN would give better fatigue life predictions, as indicated by improvement in the corresponding MSE values. Relatedly, the MSE of the predictions may also be improved by varying the number of hidden nodes in a sensitivity analysis; this, however, is not considered further in the present study and is left as a subject for future work.
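Although such a sensitivity analysis is left for future work here, the idea can be sketched as below, assuming a generic feed-forward regressor (scikit-learn's MLPRegressor, which is not the NN-NARX implementation of the study) and placeholder data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

# X_*, y_* stand in for prepared fatigue data (inputs such as stress
# amplitude, R, theta; target: fatigue life); random placeholders here.
rng = np.random.default_rng(0)
X_train, y_train = rng.random((40, 3)), rng.random(40)
X_test, y_test = rng.random((10, 3)), rng.random(10)

# Train one network per hidden-node count and record the prediction MSE.
for n_hidden in (2, 4, 8, 16):
    net = MLPRegressor(hidden_layer_sizes=(n_hidden,),
                       max_iter=2000, random_state=0)
    net.fit(X_train, y_train)
    mse = mean_squared_error(y_test, net.predict(X_test))
    print(f"hidden nodes = {n_hidden:2d}, prediction MSE = {mse:.4f}")
```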
In the following section, informative bounds of the NN prediction, which further describe the discrepancies between the fatigue lives predicted by the NN-NARX model and the experimental data, will be presented and discussed.
5.4 Informative Bounds of NN Prediction for Fatigue Life
Assessment of Multivariable and Multiaxial Loadings
with MLP-NARX Model
To better describe the observed discrepancies in fatigue lives, it is also important to examine the informative bounds of the NN prediction. With such information, the noticeable discrepancies in fatigue lives can be better characterized, the obtained NN fatigue life prediction is supported by more comprehensive information on fatigue lives, and any subsequent product design decisions are therefore better supported. The informative bounds of the NN prediction may be regarded as error bars on the NN prediction results. In the present study, the informative bounds of the NN prediction have been computed for the Levenberg-Marquardt algorithm with Bayesian regularization, following the work of Nabney (2002) and MacKay (2004). Readers are also directed to MacKay (2004) for further reference on Bayesian techniques.
Noting the objective function of the NN incorporating Bayesian regularization in Eq. (10), with the weight-decay parameter α and the inverse noise variance β, the variance of the Gaussian distribution is stated as (Nabney 2002):

$$\sigma^{2} = \frac{1}{\beta} + \mathbf{g}^{T}\mathbf{A}^{-1}\mathbf{g} \qquad (20)$$
where g and A are, respectively, the gradient matrix and the Hessian of the error function. It is important to note here that the variance has contributions from both the output noise model (1/β) and the posterior distribution of the weights.
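A minimal numerical sketch of Eq. (20) follows; β, g, and A would come from a trained network, and the placeholder values below are purely illustrative:

```python
import numpy as np

# Placeholder quantities that would be obtained from a trained network:
# beta: inverse noise variance; g: gradient of the network output with
# respect to the weights; A: Hessian of the regularized error function.
beta = 50.0
g = np.array([0.3, -0.1, 0.05])
A = np.diag([4.0, 2.0, 1.0])  # Hessian must be positive definite

# Eq. (20): predictive variance = output-noise term + weight-posterior term.
sigma2 = 1.0 / beta + g @ np.linalg.solve(A, g)
error_bar = np.sqrt(sigma2)  # one-standard-deviation error bar
print(f"variance = {sigma2:.4f}, error bar = +/-{error_bar:.4f}")
```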