$$g(x) = \sum_{i=1}^{N} w_i\,\psi\!\left[D_i R_i (x - t_i)\right] + \bar{g},$$

where the D_i are diagonal matrices built from dilation vectors and R_i, i = 1, 2, …, N, are some rotation matrices. The redundant parameter ḡ is introduced to deal with non-zero-mean functions, because the wavelet ψ is a zero-mean function. The network's equivalent structure is shown in Figure 10.6.
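As a concrete illustration (not taken from the text), the following NumPy sketch evaluates g(x) for a small set of hidden wavelet units. The Mexican-hat mother wavelet, the function name wavelet_network, and all parameter values are assumptions made only for this example.

```python
import numpy as np

def mexican_hat(z):
    """Zero-mean mother wavelet psi, here a product of 1-D Mexican-hat
    wavelets over the input dimensions (an illustrative choice)."""
    return np.prod((1.0 - z**2) * np.exp(-0.5 * z**2), axis=-1)

def wavelet_network(x, w, D, R, t, g_bar):
    """g(x) = sum_i w_i * psi(D_i R_i (x - t_i)) + g_bar.

    x     : (d,)      input vector
    w     : (N,)      output weights
    D     : (N, d)    dilation vectors (diagonals of the D_i matrices)
    R     : (N, d, d) rotation matrices
    t     : (N, d)    translation vectors
    g_bar : scalar bias handling non-zero-mean target functions
    """
    z = np.einsum('nij,nj->ni', R, x - t)   # R_i (x - t_i)
    z = D * z                                # apply D_i as a diagonal matrix
    return w @ mexican_hat(z) + g_bar

# tiny usage example with N = 3 wavelet units in d = 2 dimensions
rng = np.random.default_rng(0)
N, d = 3, 2
theta = rng.uniform(0, 2 * np.pi, N)
R = np.stack([[[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]] for a in theta])
print(wavelet_network(rng.standard_normal(d), rng.standard_normal(N),
                      rng.uniform(0.5, 2.0, (N, d)), R,
                      rng.standard_normal((N, d)), 0.3))
```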
Rao and Kumthekar (1994) worked out the structure of recurrent wavelet networks using the equivalence between

- the statement of Cybenko (1989) that, if σ(·) is a continuous discriminating function, then finite sums of the form

$$f(x) = \sum_{i=1}^{N} w_i\,\sigma\!\left(a_i^{T} x + b_i\right)$$

are dense in the space of continuous functions, so that any continuous function f(·) may be approximated by a weighted sum of σ(·) functions, and

- the analogous results of wavelet theory, which state that arbitrary functions can be written as a weighted sum of dilated and translated wavelets

$$f(x) = \sum_{i=1}^{N} w_i \left|\det D_i\right|^{1/2} \psi\!\left(D_i x - t_i\right).$$
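The sketch below, again only an illustration and not from the text, demonstrates this second statement in one dimension: the dilations and translations are fixed on an arbitrary grid and only the weights w_i are fitted by ordinary least squares, whereas a full wavelet network would also adapt the dilations and translations. The mother wavelet, the grid, and the target function are all assumptions of this example.

```python
import numpy as np

def psi(z):
    # zero-mean Mexican-hat mother wavelet (illustrative choice)
    return (1.0 - z**2) * np.exp(-0.5 * z**2)

# fixed grid of dilations a_i and translations t_i (hypothetical values)
dilations = np.repeat([1.0, 2.0, 4.0], 9)
translations = np.tile(np.linspace(-2.0, 2.0, 9), 3)

x = np.linspace(-2.0, 2.0, 200)
target = np.sin(3.0 * x) * np.exp(-x**2)          # arbitrary continuous target

# design matrix: Phi[:, i] = |a_i|^{1/2} * psi(a_i * x - t_i)
Phi = np.sqrt(np.abs(dilations)) * psi(np.outer(x, dilations) - translations)

# only the weights w_i are fitted here (ordinary least squares)
w, *_ = np.linalg.lstsq(Phi, target, rcond=None)
print("max abs error:", np.max(np.abs(Phi @ w - target)))
```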
A more transparent wavelet network representation was proposed by Chen et al. (1999). In this network, the wavelets are used as activation functions in the network's hidden layer, replacing the sigmoid functions, whereby the wavelet shape and the wavelet parameters are adaptively determined to deliver the optimal value of an energy function. In analogy with the input-output mapping of a one-hidden-layer perceptron, generally written as (see Chapter 3)
$$y = f_o\!\left[\,\sum_{i=1}^{N} w_i^{o}\, f_h\!\left(w_i^{T} x\right)\right],$$
Chen et al. (1999) proposed a similar wavelet neural network structure

$$y_i(t) = \sigma\!\left\{\sum_{j=0}^{n} w_{ij}\,\varphi_{ab}\!\left[\sum_{k=0}^{m} w_{jk}\, x_k(t)\right]\right\}$$
for i = 1, 2, …, N, where x and y are the input and the output vectors, respectively, and w_ij are the connecting weights between the output unit i and the
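A minimal NumPy sketch of this two-layer mapping, evaluated for a single time step, is given below. The Mexican-hat activation used for φ_ab, the shared dilation a and translation b, the layer sizes, and the random weights are all illustrative assumptions; in Chen et al. (1999) the wavelet shape and parameters are instead adapted to optimise an energy function.

```python
import numpy as np

def sigma(z):
    # logistic output nonlinearity (the sigma in the formula above)
    return 1.0 / (1.0 + np.exp(-z))

def phi_ab(z, a=1.0, b=0.0):
    # dilated and translated wavelet activation phi_ab;
    # the Mexican hat is chosen here only for illustration
    u = (z - b) / a
    return (1.0 - u**2) * np.exp(-0.5 * u**2) / np.sqrt(a)

def wavelet_nn(x, W_hidden, W_out, a=1.0, b=0.0):
    """y_i = sigma( sum_j W_out[i, j] * phi_ab( sum_k W_hidden[j, k] * x[k] ) ).

    x        : (m+1,)      input vector (a constant 1 can model the k = 0 term)
    W_hidden : (n+1, m+1)  weights w_jk between input unit k and hidden unit j
    W_out    : (N, n+1)    weights w_ij between hidden unit j and output unit i
    """
    hidden = phi_ab(W_hidden @ x, a, b)   # wavelet activations of the hidden layer
    return sigma(W_out @ hidden)

# usage with m = 3 inputs, n = 4 hidden units, N = 2 outputs (sizes illustrative)
rng = np.random.default_rng(1)
x = np.append(rng.standard_normal(3), 1.0)        # bias-like constant input
print(wavelet_nn(x, rng.standard_normal((5, 4)), rng.standard_normal((2, 5))))
```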