[Figure: a multilayer perceptron with inputs p_1, ..., p_R feeding a hidden layer of S^1 sigmoidal nodes (weights w^1_{i,j}, biases b^1_i, net inputs n^1_i, activations a^1_i), whose activations feed a single linear output node (weights w^2_{1,i}, bias b^o, output a^o = n^o = t).]

Fig. 2 MLP with one hidden layer and single output
Furthermore, for convenience P and T will be referred to as the sets of input and output variables, respectively. Given the observation set O, learning in a NN for the realization of the estimate f means adjusting the vector of weights w and biases b using a learning rule or learning algorithm in such a way that f minimizes the objective function, or empirical error, defined as:
E(w) = \sum_{q=1}^{Q} \left( t_q - f(p_q; w) \right)^2 \qquad (1)
and generalizes well, i.e., outputs properly when a novel input vector p_test, never seen before, is fed into the network.
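For concreteness, a minimal NumPy sketch of the empirical error of Eq. (1) might look as follows; the function name empirical_error and the array shapes are illustrative assumptions, and f stands for any parametrized network estimate:

```python
import numpy as np

def empirical_error(f, P, T, w):
    """Sum-of-squares empirical error E(w) of Eq. (1).

    f : callable mapping (input vector p_q, parameters w) to a scalar output
    P : array of shape (Q, R) holding the Q input vectors
    T : array of length Q holding the Q target values
    w : the network's weights and biases, packed in any form f accepts
    """
    # Q residuals t_q - f(p_q; w), squared and summed over the training set
    residuals = np.array([t_q - f(p_q, w) for p_q, t_q in zip(P, T)])
    return float(np.sum(residuals ** 2))
```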
The estimate f realized by the MLP shown in Fig. 2 given the training set O can
be written as:
f(p; w) = \sum_{i=1}^{S^1} w^2_{1,i} \, \sigma\!\left( \sum_{j=1}^{R} w^1_{i,j} \, p_j + b^1_i \right) + b^o \qquad (2)
where σ(·) is a sigmoidal function used in the nodes of the hidden layer.
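As an illustration, Eq. (2) can be sketched directly in NumPy as the forward pass below; the parameter layout (W1, b1, w2, b_o) is an assumption made for readability:

```python
import numpy as np

def sigmoid(x):
    # Logistic sigmoid, one common choice of sigmoidal function.
    return 1.0 / (1.0 + np.exp(-x))

def mlp_forward(p, W1, b1, w2, b_o):
    """Estimate f(p; w) of Eq. (2) for the MLP of Fig. 2.

    p   : input vector of length R
    W1  : hidden-layer weights, shape (S1, R), entries w^1_{i,j}
    b1  : hidden-layer biases, length S1, entries b^1_i
    w2  : output-layer weights, length S1, entries w^2_{1,i}
    b_o : output bias b^o
    """
    a1 = sigmoid(W1 @ p + b1)    # hidden-layer activations a^1_i
    return float(w2 @ a1 + b_o)  # single linear output node
```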
In addition, keeping in mind that learning in a NN principally means updating the network weights based on the given set of examples so that the network will give a proper response to new examples (a sketch of such an update follows), below are two limiting factors of the NN
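As a minimal sketch of such weight updating, the following assumes plain batch gradient descent on the empirical error of Eq. (1) for the network of Eq. (2); the learning rate lr and the hand-derived gradients are illustrative, not the chapter's specific learning algorithm:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gd_step(P, T, W1, b1, w2, b_o, lr=0.01):
    """One batch gradient-descent update of all weights and biases,
    reducing the empirical error E(w) of Eq. (1)."""
    gW1 = np.zeros_like(W1)
    gb1 = np.zeros_like(b1)
    gw2 = np.zeros_like(w2)
    gb_o = 0.0
    for p, t in zip(P, T):
        a1 = sigmoid(W1 @ p + b1)                # hidden activations
        e = (w2 @ a1 + b_o) - t                  # signed residual f(p; w) - t
        gb_o += 2.0 * e                          # dE/db^o
        gw2 += 2.0 * e * a1                      # dE/dw^2_{1,i}
        delta = 2.0 * e * w2 * a1 * (1.0 - a1)   # backprop through the sigmoid
        gb1 += delta                             # dE/db^1_i
        gW1 += np.outer(delta, p)                # dE/dw^1_{i,j}
    # steepest-descent step on every parameter
    return W1 - lr * gW1, b1 - lr * gb1, w2 - lr * gw2, b_o - lr * gb_o
```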