Fig. 1 The schematic of CLD for fatigue life assessment analysis
3 NN Architectures
3.1 MLP
Figure 2 shows an MLP with one hidden layer and a single output, which is the NN architecture most commonly employed in NN-based simulation.
The notation presented in Fig. 2 is: p the input set, L the number of elements in the input vector, s the number of hidden nodes, n the weighted sum of the inputs, a the output of the activation function in the corresponding layer, w^1_{j,i} and b^1_j the input weights and biases (i = 1 to L, j = 1 to s), w^2_j and b^o the layer weights and output bias, and y the MLP output. Superscripts 1 and 2 denote the first (hidden) layer and the second (output) layer, respectively.
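Under this notation, a single forward pass of the one-hidden-layer MLP can be sketched as follows. This is a minimal NumPy illustration; the array names, the random initialization, and the tanh activation are assumptions for the sketch, not the chapter's code:

```python
import numpy as np

# Dimensions (assumed values for illustration): L inputs, s hidden nodes.
L, s = 3, 4
rng = np.random.default_rng(0)

# First (hidden) layer: input weights w^1_{j,i} and biases b^1_j.
W1 = rng.standard_normal((s, L))
b1 = rng.standard_normal(s)

# Second (output) layer: layer weights w^2_j and output bias b^o.
w2 = rng.standard_normal(s)
bo = rng.standard_normal()

def mlp_forward(p):
    """Forward pass: n = W1 p + b1, a = tanh(n), y = w2 . a + b^o."""
    n1 = W1 @ p + b1          # weighted sum of inputs, hidden layer
    a1 = np.tanh(n1)          # activation output of hidden layer
    return w2 @ a1 + bo       # single MLP output y

p = rng.standard_normal(L)    # one input vector of length L
y = mlp_forward(p)
print(float(y))
```

The hidden layer maps the L-element input vector to s activations; the output layer then collapses them to the scalar y.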
Learning in an NN is achieved by adjusting the corresponding weights in response to the external environment of input sets. The weight adjustment is accomplished by a set of learning rules through which an objective function is minimized. In what follows, the problem formulation of NN learning is concisely presented, particularly in the supervised learning context of the MLP. Nonetheless, the formulation can also be extended to the RBFNN.
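As one concrete instance of such a learning rule, the weights can be adjusted by gradient descent on a mean-squared-error objective. The sketch below is an assumed illustration (toy target, tanh activation, fixed learning rate), not the chapter's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
L, s, Q = 1, 8, 64
P = rng.uniform(-1, 1, (Q, L))           # input set
T = np.sin(2 * P[:, 0])                  # toy targets (assumed example)

# Weights of a one-hidden-layer MLP (notation as in Fig. 2).
W1 = rng.standard_normal((s, L)) * 0.5
b1 = np.zeros(s)
w2 = rng.standard_normal(s) * 0.5
bo = 0.0

lr = 0.05
for epoch in range(500):
    # Forward pass over the whole observation set.
    N1 = P @ W1.T + b1                   # (Q, s) weighted sums
    A1 = np.tanh(N1)                     # hidden activations
    Y = A1 @ w2 + bo                     # MLP outputs
    err = Y - T                          # residuals

    # Backpropagated gradients of the mean squared error.
    g_y = 2 * err / Q
    g_w2 = A1.T @ g_y
    g_bo = g_y.sum()
    g_n1 = np.outer(g_y, w2) * (1 - A1 ** 2)   # tanh derivative
    g_W1 = g_n1.T @ P
    g_b1 = g_n1.sum(axis=0)

    # Gradient-descent weight adjustment (the learning rule).
    w2 -= lr * g_w2; bo -= lr * g_bo
    W1 -= lr * g_W1; b1 -= lr * g_b1

mse = float(np.mean((np.tanh(P @ W1.T + b1) @ w2 + bo - T) ** 2))
print(mse)
```

Minimizing the objective drives the residuals down; any weight set reaching a low objective value is an acceptable outcome of the learning rule.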
Let (P, T) be a pair of random variables with values in P = ℝ^m and T = ℝ, respectively. The regression of T on P is a function of P, f: P → T, giving the mean value of T conditioned on P, E(T|P).

Let a random sample O_1 = {(P_1, T_1), ..., (P_Q, T_Q)} of size Q be drawn from the distribution of (P, T) as an observation set. For Q ≥ 1, f_Q will denote an estimator of f based on the random sample, that is, a map f_Q: O_1 → f_Q(O_1; ·), where for fixed O_1, p → f_Q(O_1; p) is an estimate of the regression function f(p).
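An elementary instance of such an estimator f_Q is linear least squares, which maps an observation set O_1 of size Q to a fitted function f_Q(O_1; ·). The sketch below is an assumed illustration (the linear generating model and noise level are not from the chapter):

```python
import numpy as np

rng = np.random.default_rng(2)
m, Q = 2, 100

# Draw an observation set O_1 = {(P_1, T_1), ..., (P_Q, T_Q)}:
# here T = 3*P[0] - P[1] + noise (assumed generating model).
P = rng.standard_normal((Q, m))
T = 3 * P[:, 0] - P[:, 1] + 0.1 * rng.standard_normal(Q)

# Linear least-squares fit: one concrete map from the sample O_1
# to an estimate of the regression function E(T|P).
beta_hat, *_ = np.linalg.lstsq(P, T, rcond=None)

def f_Q(p):
    """Estimate f_Q(O_1; p) of the regression function at a fixed p."""
    return p @ beta_hat

p0 = np.array([1.0, 1.0])
print(f_Q(p0))   # close to the true regression value 3*1 - 1 = 2
```

The NN estimators discussed in the chapter play the same role: the observation set fixes the weights, and the trained network is the map p → f_Q(O_1; p).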