Fig. 1.4. A feedforward neural network with direct terms. Its output $g(\boldsymbol{x}, \boldsymbol{w})$ depends on the input vector $\boldsymbol{x}$, whose components are $1, x_1, x_2, \ldots, x_n$, and on the vector of parameters $\boldsymbol{w}$, whose components are the parameters of the network.
RBF (Radial Basis Functions) and Wavelet Networks
The parameters of such networks are assigned to the nonlinear activation functions rather than to the connections; as in MLP's, however, the output is a linear combination of the outputs of the hidden RBF's. Therefore, the output of the network (for Gaussian RBF's) is given by
$$
g(\boldsymbol{x}, \boldsymbol{w}) = \sum_{i=1}^{N_c} w_{N_c+1,\,i} \exp\!\left( -\,\frac{\sum_{j=1}^{n} (x_j - w_{ij})^2}{2\, w_i^2} \right),
$$
where $\boldsymbol{x}$ is the $n$-vector of inputs, and $\boldsymbol{w}$ is the vector of the $(n+2)N_c$ parameters [Broomhead 1988; Moody 1989]; hidden neurons are numbered from 1 to $N_c$, and the output neuron is numbered $N_c+1$.
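As a concrete illustration, the following sketch computes this output for a Gaussian RBF network. It is a minimal reading of the formula above; the function and variable names (rbf_output, centers, widths, out_weights) are ours, not taken from the text.

```python
import numpy as np

def rbf_output(x, centers, widths, out_weights):
    """Output g(x, w) of a Gaussian RBF network.

    x           : (n,)    input vector
    centers     : (Nc, n) the w_ij in the formula (Gaussian centers)
    widths      : (Nc,)   the w_i (standard deviations)
    out_weights : (Nc,)   the w_{Nc+1, i} (last-layer weights)
    """
    # Squared distance from x to each center, summed over the n inputs
    sq_dist = np.sum((x - centers) ** 2, axis=1)
    # Gaussian activation of each of the Nc hidden RBFs
    phi = np.exp(-sq_dist / (2.0 * widths ** 2))
    # The output neuron forms a linear combination of the hidden outputs
    return out_weights @ phi

# Example with n = 2 inputs and Nc = 3 hidden neurons
rng = np.random.default_rng(0)
centers = rng.normal(size=(3, 2))
widths = np.array([1.0, 0.5, 2.0])
out_weights = np.array([0.3, -1.2, 0.7])
print(rbf_output(np.array([0.5, -0.5]), centers, widths, out_weights))
```

Each hidden neuron contributes $n$ center coordinates, one width, and one output weight, which accounts for the $(n+2)N_c$ parameters.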
The parameters of an RBF network fall into two classes: the parameters of the last layer, which convey information from the $N_c$ RBF outputs to the output linear neuron, and the parameters of the RBF's themselves (centers and standard deviations for Gaussian RBF's). The connections of the first layer (from inputs to RBF's) are all equal to 1. In such networks, the output is a linear function of the parameters of the last layer and a nonlinear function of the parameters of the Gaussians. This has an important consequence that will be examined below.
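One way to see why that linearity matters: if the centers and widths are held fixed, the last-layer weights can be estimated by ordinary least squares, with no iterative nonlinear optimization. The sketch below is our own illustration of this point, not a procedure given in the text.

```python
import numpy as np

def fit_output_weights(X, y, centers, widths):
    """Least-squares estimate of the last-layer weights of a Gaussian
    RBF network, with centers and widths held fixed.

    X : (m, n) training inputs, y : (m,) training targets.
    """
    # Design matrix: one row per example, one column per hidden RBF
    sq_dist = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2)
    Phi = np.exp(-sq_dist / (2.0 * widths ** 2))
    # Because the output is linear in these weights, the problem
    # min_w ||Phi w - y||^2 has a closed-form least-squares solution
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w
```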
Wavelet networks have exactly the same structure, except that the nonlinearities of the neurons are wavelets instead of Gaussians.
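To make the structural parallel concrete, here is the same hidden-layer computation with a radial wavelet in place of the Gaussian. The Mexican-hat mother wavelet used below is one common choice, assumed here for illustration rather than prescribed by the text.

```python
import numpy as np

def mexican_hat(u):
    # One common mother wavelet; the text does not prescribe this choice
    return (1.0 - u ** 2) * np.exp(-u ** 2 / 2.0)

def wavelet_net_output(x, translations, dilations, out_weights):
    """Same structure as rbf_output above, with a wavelet nonlinearity;
    translations and dilations play the roles of centers and widths."""
    u = np.sqrt(np.sum((x - translations) ** 2, axis=1)) / dilations
    return out_weights @ mexican_hat(u)
```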