learning. First, only a finite number of observation points (example pairs) are available. This means that the available examples must be fully utilized during NN learning so that the underlying process is learned properly; only then can the practicability and feasibility of using limited examples to yield accurate predictions be assured. The second is that the realization of the target at the observation points $P_q$, $q = 1, \ldots, Q$, is observed with an additive noise $e_q$:

$$e_q = T_q - f(P_q) \qquad (3)$$
The observations are therefore noisy, and the target noises $e_q$ introduce a random component into the estimation error.
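For concreteness, the following sketch generates noisy targets consistent with Eq. (3); the underlying function and the zero-mean Gaussian noise are assumptions made only for illustration, as the text does not specify them:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(p):
    """Hypothetical underlying process (assumed for illustration)."""
    return np.sin(p)

P = np.linspace(0.0, 2 * np.pi, 20)               # Q = 20 observation points P_q
e = rng.normal(loc=0.0, scale=0.1, size=P.shape)  # additive target noise e_q
T = f(P) + e                                      # noisy targets T_q

# The noise is recovered as the residual e_q = T_q - f(P_q), as in Eq. (3).
residual = T - f(P)
```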
3.2 RBFNN
Figure 3 shows a schematic diagram of the RBFNN, with the Euclidean distance function denoted by $\| \cdot \|$, which is explained further in this section. In Fig. 3, the inputs are denoted by $x$, the target outputs by $y$, and the number of hidden nodes by $s$.
As depicted in Fig. 3, the inputs reach the hidden-layer nodes unchanged. The output estimate $f$ realized by the RBFNN given the training examples can be expressed as:

$$f = \sum_{i=1}^{s} \phi_i\left(\| x - c_i \|\right) \cdot w_{j,i} \qquad (4)$$
where $x$ is the vector of inputs, $c_i$ is the $i$th center node in the hidden layer, $w_{j,i}$ is the weight connecting the $i$th center node to the $j$th output node, $\phi_i$ are the radial basis functions of the center nodes, and $\| x - c_i \|$ is the distance between the point representing the input $x$ and the center of the $i$th hidden node.
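As an illustration of Eq. (4), the sketch below computes the output of a single output node. The Gaussian form of the basis functions and the example centers, weights, and width are assumptions made only for illustration; the source does not fix these choices.

```python
import numpy as np

def rbf_output(x, centers, weights, width=1.0):
    """Compute f = sum_i phi_i(||x - c_i||) * w_i for one output node."""
    # Euclidean distances ||x - c_i|| between the input and each center.
    dists = np.linalg.norm(centers - x, axis=1)
    # Gaussian radial basis functions phi_i(r) = exp(-(r / width)**2) (assumed form).
    phi = np.exp(-(dists / width) ** 2)
    # Weighted sum over the s hidden (center) nodes, as in Eq. (4).
    return phi @ weights

# Example usage with s = 3 hidden nodes and 2-dimensional inputs.
centers = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])  # c_i
weights = np.array([0.5, -0.3, 0.8])                       # w_{j,i} for one output j
x = np.array([0.5, 0.5])
print(rbf_output(x, centers, weights))
```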
Fig. 3 Schematic diagram of RBFNN