$$W_{ip}^{(k-1)}(t+1) = W_{ip}^{(k-1)}(t) + \alpha \sum_{n=1}^{I} \delta_{np}^{(k)}\, y_{ni}^{(k-1)} \qquad (10)$$

$$\delta_{np}^{(k)} = \mathrm{sgm}'_{np}(\cdot)\cdot\left[\sum_{n=1}^{I} \delta_{np}^{(k)}\, W_{pl}^{(k)}(t)\right] \qquad (11)$$
where $t$ is the iteration number and $\alpha$ is the learning rate.
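The weight update of (10)–(11) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the layer shapes, the sigmoid activation, and the function name are assumptions.

```python
import numpy as np

def sigmoid(x):
    """Sigmoid activation sgm(x)."""
    return 1.0 / (1.0 + np.exp(-x))

def backprop_hidden_update(W_prev, y_prev, net_hidden, W_next, delta_next, alpha):
    """One update of the weights feeding a hidden layer, in the spirit of (10)-(11).

    W_prev     : (n_in, n_hid)  weights W^{(k-1)} being updated
    y_prev     : (n_in,)        outputs y^{(k-1)} of the previous layer
    net_hidden : (n_hid,)       pre-activations of the hidden layer
    W_next     : (n_hid, n_out) weights W^{(k)} of the next layer
    delta_next : (n_out,)       delta terms of the next layer
    alpha      : learning rate
    """
    s = sigmoid(net_hidden)
    # (11): delta of this layer = sgm'(.) times the weighted sum of the
    # next layer's deltas (sigmoid derivative is s * (1 - s))
    delta = s * (1.0 - s) * (W_next @ delta_next)
    # (10): W(t+1) = W(t) + alpha * (previous outputs outer delta)
    return W_prev + alpha * np.outer(y_prev, delta)
```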
4.2 Radial Basis Function Networks
A radial basis function (RBF) network has a hidden layer of radial units, each modeling a Gaussian response surface. Since these functions are nonlinear, more than one hidden layer is never necessary to model any shape of function: sufficiently many radial units can model any function. RBF networks have a number of advantages over MLPs. First, as previously stated, they can model any nonlinear function using a single hidden layer, which removes some design decisions about the number of layers. Second, the simple linear transformation in the output layer can be optimized fully using traditional linear modeling techniques, which are fast and do not suffer from problems such as the local minima that plague MLP training techniques.
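The second advantage can be illustrated concretely: because the output layer is linear in the weights, it can be fitted in one step by least squares rather than by iterative gradient descent. A minimal sketch, in which the hidden-activation matrix `phi` and the target values are hypothetical:

```python
import numpy as np

# phi: hypothetical (n_samples, H) matrix of hidden radial-unit activations
phi = np.array([[1.0, 0.4],
                [0.6, 1.0],
                [0.2, 0.7]])
targets = np.array([1.0, 0.0, 0.5])  # hypothetical desired outputs

# Direct, non-iterative fit of the linear output weights:
w, *_ = np.linalg.lstsq(phi, targets, rcond=None)
pred = phi @ w  # fitted network outputs on the training inputs
```

A single `lstsq` call finds the global optimum of the output-layer weights, which is exactly why no local-minima-prone training is needed for that layer.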
The radial basis Gaussian transfer function considered in this study is given by (12):

$$F(u; c, \sigma) = \exp\left(-\frac{\lVert u - c \rVert^{2}}{\sigma^{2}}\right) \qquad (12)$$
where $c$ is the center, $\sigma$ is the variance, and $u$ is the input variable. The output of the $i$th neuron in the output layer at time $n$ is
$$y_i = \sum_{j=1}^{H} W_{ij}\, F_j(u; c, \sigma) \qquad (13)$$
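Equations (12) and (13) together define the forward pass of the network, which can be sketched as below. The array shapes and function names are assumptions for illustration:

```python
import numpy as np

def gaussian_rbf(u, c, sigma):
    """Gaussian transfer function of (12): exp(-||u - c||^2 / sigma^2)."""
    return np.exp(-np.sum((u - c) ** 2) / sigma ** 2)

def rbf_output(u, centers, sigmas, W):
    """Output layer of (13): y_i = sum_j W_ij * F_j(u; c_j, sigma_j).

    u       : (d,)   input vector
    centers : (H, d) hidden-unit centers
    sigmas  : (H,)   hidden-unit widths
    W       : (L, H) output-layer weights
    """
    # Activations of the H hidden radial units
    phi = np.array([gaussian_rbf(u, c, s) for c, s in zip(centers, sigmas)])
    # Linear combination in the output layer
    return W @ phi
```

An input placed exactly on a center activates that unit at its peak value of 1, so the corresponding output weight is passed through unchanged.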
The training process of the radial basis function neural network runs according to the following steps (Oh 2010; Wang et al. 2010):
- Initialize all weights at random.
- Calculate the output vector by Eq. (13).
- Calculate the error term "e" of each neuron in the output layer according to (14):

$$e_i(n) = \bar{y}_i(n) - y_i(n), \quad (i = 1, 2, \ldots, L) \qquad (14)$$

where $\bar{y}_i(n)$ denotes the desired output of the $i$th neuron.
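The steps above can be sketched as follows. The error term of (14) is computed directly; the gradient-style correction of the output weights from that error is an assumption (the source lists the error step but this chunk does not show the update rule):

```python
import numpy as np

def output_errors(y_desired, y_actual):
    """Error term of (14): e_i(n) = desired output minus network output."""
    return y_desired - y_actual

def lms_weight_update(W, phi, e, eta):
    """Hypothetical LMS-style correction of the output weights from the
    errors e and the hidden activations phi (an assumed update rule)."""
    return W + eta * np.outer(e, phi)
```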