[Figure: the input x(n) feeds a hidden layer of radial basis functions φ_1(·), φ_2(·), ..., φ_{N_neuron}(·); their outputs are weighted by w_0, w_1, ..., w_{N_neuron} and summed (Σ) to produce the output y(n).]
FIGURE 7.9
Structure of an RBF neural network.
and a linear output layer. Roughly speaking, the main difference between the
MLP and the RBF network lies in the hidden layer, which is formed by neurons
with a model distinct from that shown in (7.26). Figure 7.9 depicts the RBF
neural network.
In contrast with the MLP, the RBF builds a generic nonlinear mapping
by placing nonlinear functions with a characteristic radial decreasing (or
increasing) pattern around certain positions (typically referred to as centers)
and linearly combining them. Mathematically, this is expressed by
$$y(n) = \mathbf{w}^{T}\boldsymbol{\varphi}\left[\mathbf{x}(n)\right] \tag{7.45}$$

where $\mathbf{x}(n)$ is the input vector, $\mathbf{w}$ is the weight vector of the linear output
layer, and

$$\boldsymbol{\varphi}(\cdot) = \left[\varphi_{1}(\cdot)\ \varphi_{2}(\cdot)\ \cdots\ \varphi_{N_{\text{neuron}}}(\cdot)\right]^{T} \tag{7.46}$$
is the vector containing the so-called RBFs. Two examples of RBFs are given
in Figure 7.10: the Gaussian function, given by
$$\varphi(u) = \exp\left(-\frac{(u-\mu)^{2}}{\sigma^{2}}\right) \tag{7.47}$$
and the multiquadratic function, given by
$$\varphi(u) = \frac{\sqrt{(u-\mu)^{2} + \sigma^{2}}}{\sigma} \tag{7.48}$$
where μ and σ are the center and the dispersion, respectively.
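To make (7.45) through (7.48) concrete, here is a minimal Python sketch of the forward computation of an RBF network. It is an illustration, not code from the book: the function names (gaussian_rbf, multiquadratic_rbf, rbf_network_output) and the example parameter values are assumptions, and the bias weight w_0 shown in Figure 7.9 is omitted for brevity.

```python
import numpy as np

def gaussian_rbf(u, mu, sigma):
    """Gaussian RBF, Eq. (7.47): exp(-(u - mu)^2 / sigma^2)."""
    return np.exp(-((u - mu) ** 2) / sigma ** 2)

def multiquadratic_rbf(u, mu, sigma):
    """Multiquadratic RBF, Eq. (7.48): sqrt((u - mu)^2 + sigma^2) / sigma."""
    return np.sqrt((u - mu) ** 2 + sigma ** 2) / sigma

def rbf_network_output(x, centers, sigmas, w, basis=gaussian_rbf):
    """Eq. (7.45): y(n) = w^T phi[x(n)], with phi the vector of Eq. (7.46)."""
    phi = basis(x, centers, sigmas)  # one basis response per hidden neuron
    return w @ phi

# A 3-neuron network evaluated at a single input sample (illustrative values).
centers = np.array([-1.0, 0.0, 1.0])   # centers mu_i
sigmas  = np.array([0.5, 0.5, 0.5])    # dispersions sigma_i
w       = np.array([0.3, -0.8, 0.5])   # output-layer weights
print(rbf_network_output(0.2, centers, sigmas, w))
print(rbf_network_output(0.2, centers, sigmas, w, basis=multiquadratic_rbf))
```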
In Figure 7.11, examples of the nonlinear mapping provided by an
RBF network are presented. It is important to remark that the approxi-
mation scheme that characterizes RBF networks also leads to a universal
approximation property.
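Since the output layer in (7.45) is linear in w, one common way to obtain a mapping of this kind, once the centers and dispersions are fixed, is to fit the output weights by ordinary least squares. The sketch below illustrates this on a toy one-dimensional problem; the target function, parameter values, and variable names are assumptions for illustration, not taken from Figure 7.11.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-2.0, 2.0, 200)                              # input samples x(n)
d = np.sin(np.pi * x) + 0.05 * rng.standard_normal(x.size)   # desired output

centers = np.linspace(-2.0, 2.0, 10)   # centers mu_i spread over the input range
sigma = 0.4                            # common dispersion

# Design matrix: row n is phi[x(n)] of Eq. (7.46), Gaussian basis of Eq. (7.47).
Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / sigma ** 2)

# The output (7.45) is linear in w, so min_w ||Phi w - d||^2 has a closed-form solution.
w, *_ = np.linalg.lstsq(Phi, d, rcond=None)

y = Phi @ w                            # RBF network output over all samples
print("RMS approximation error:", np.sqrt(np.mean((y - d) ** 2)))
```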
 