$$
O_F(A) =
\begin{array}{c|cccc}
       & l_1            & l_2            & \dots  & l_n            \\ \hline
k_1    & F(a_{k_1,l_1}) & F(a_{k_1,l_2}) & \dots  & F(a_{k_1,l_n}) \\
k_2    & F(a_{k_2,l_1}) & F(a_{k_2,l_2}) & \dots  & F(a_{k_2,l_n}) \\
\vdots & \vdots         & \vdots         & \ddots & \vdots         \\
k_m    & F(a_{k_m,l_1}) & F(a_{k_m,l_2}) & \dots  & F(a_{k_m,l_n})
\end{array}.
$$
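The operation $O_F$ leaves the index sets $\{k_1,\dots,k_m\}$ and $\{l_1,\dots,l_n\}$ unchanged and simply applies the transfer function $F$ to every element of $A$. As a minimal sketch (not taken from the source), an index matrix can be modelled in Python as a dictionary keyed by (row index, column index) pairs; the variable names and the logistic transfer function below are illustrative assumptions.

import math
from typing import Callable, Dict, Hashable, Tuple

# Illustrative stand-in for an index matrix: keys are (row index, column index).
IndexMatrix = Dict[Tuple[Hashable, Hashable], float]

def apply_O_F(F: Callable[[float], float], A: IndexMatrix) -> IndexMatrix:
    """Return the index matrix whose (k, l)-th element is F(a_{k,l})."""
    return {(k, l): F(a_kl) for (k, l), a_kl in A.items()}

def logsig(x: float) -> float:
    """Logistic transfer function, used here only as an example."""
    return 1.0 / (1.0 + math.exp(-x))

# Example with illustrative index labels and values.
A = {("k1", "l1"): 0.5, ("k1", "l2"): -1.0,
     ("k2", "l1"): 2.0, ("k2", "l2"): 0.0}
print(apply_O_F(logsig, A))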
Hence, we can describe the neural network in the form
$$
a_1 = O_F((P \odot W_1) \oplus B_1),
$$
$$
a_i = O_F((a_{i-1} \odot W_i) \oplus B_i).
$$
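Reading $\odot$ as index-matrix multiplication and $\oplus$ as index-matrix addition, this recursion is an ordinary layered forward pass. The sketch below (an assumption, not the source's implementation) uses NumPy arrays as stand-ins for $P$, $W_i$ and $B_i$; calling it with the lists $[W_1,\dots,W_{M-1}]$ and $[B_1,\dots,B_{M-1}]$ returns $a_{M-1}$, i.e. the nested composition derived next.

import numpy as np

def forward(P, weights, biases, F):
    """Forward pass: a_1 = O_F((P ⊙ W_1) ⊕ B_1), a_i = O_F((a_{i-1} ⊙ W_i) ⊕ B_i).
    NumPy arrays stand in for the index matrices (an illustrative assumption)."""
    a = P
    for W_i, B_i in zip(weights, biases):
        a = F((a @ W_i) + B_i)   # O_F applies F element-wise
    return a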
Therefore,
$$
a_{M-1} = O_F((a_{M-2} \odot W_{M-1}) \oplus B_{M-1})
$$
$$
= O_F((O_F((\dots O_F((O_F((P \odot W_1) \oplus B_1) \odot W_2) \oplus B_2) \dots \odot W_{M-2}) \oplus B_{M-2}) \odot W_{M-1}) \oplus B_{M-1}).
$$
A more general case is the following: each layer has its own transfer function, i.e., the function $F_i$ is associated with the $i$-th layer. Therefore, the NN has the IM-representation
$$
a_{M-1} = O_{F_{M-1}}((O_{F_{M-2}}((\dots O_{F_2}((O_{F_1}((P \odot W_1) \oplus B_1) \odot W_2) \oplus B_2) \dots \odot W_{M-2}) \oplus B_{M-2}) \odot W_{M-1}) \oplus B_{M-1}).
$$
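Under the same stand-in representation as above (again a sketch, not the source's code), the only change in this more general case is that the transfer function becomes a per-layer parameter.

import numpy as np

def forward_per_layer(P, weights, biases, transfer_functions):
    """a_i = O_{F_i}((a_{i-1} ⊙ W_i) ⊕ B_i), one transfer function per layer.
    Arrays stand in for the index matrices (an illustrative assumption)."""
    a = P
    for W_i, B_i, F_i in zip(weights, biases, transfer_functions):
        a = F_i((a @ W_i) + B_i)
    return a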
Below, we will extend the results from [23], using the ideas from Sect. 5.4. Now, for each layer we juxtapose an IMFE
$$
F_i =
\begin{array}{c|ccc}
    & a_{i,1} & \dots & a_{i,s_i} \\ \hline
p_0 & f_{i,1} & \dots & f_{i,s_i}
\end{array},
$$
where $f_{i,j} = f_{i,j}(x)$ for $1 \le i \le M-1$ and $1 \le j \le s_i$. Therefore, for the $j$-th node from the $i$-th layer of the multilayered network we juxtapose the function $f_{i,j}$ and, as a result, we obtain
$$
a_1 = F_1((P \odot W_1) \oplus B_1),
$$
$$
a_i = F_i((a_{i-1} \odot W_i) \oplus B_i).
$$
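In this per-node formulation the IMFE $F_i$ no longer applies one function to every element: the $j$-th component of the result is transformed by its own $f_{i,j}$. A hedged sketch, with plain arrays standing in for the index matrices and purely illustrative names:

import numpy as np

def apply_imfe(node_functions, z):
    """Apply f_{i,1}, ..., f_{i,s_i} component-wise to the vector z."""
    return np.array([f(z_j) for f, z_j in zip(node_functions, z)])

def forward_per_node(P, weights, biases, layer_node_functions):
    """a_i = F_i((a_{i-1} ⊙ W_i) ⊕ B_i) with one transfer function per node
    (arrays as stand-ins for the index matrices; an illustrative assumption)."""
    a = P
    for W_i, B_i, F_i in zip(weights, biases, layer_node_functions):
        a = apply_imfe(F_i, (a @ W_i) + B_i)
    return a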
Therefore,
$$
a_{M-1} = F_{M-1}((a_{M-2} \odot W_{M-1}) \oplus B_{M-1})
$$
$$
= F_{M-1}((F_{M-2}((\dots F_2((F_1((P \odot W_1) \oplus B_1) \odot W_2) \oplus B_2) \dots \odot W_{M-2}) \oplus B_{M-2}) \odot W_{M-1}) \oplus B_{M-1}).
$$
 