2.4.1 Local Model Network
The basic assumption underlying the use of learning systems is that the behavior of the system can be described in terms of the training set $\{x_i, y_i\}_{i=1}^{N}$. It is therefore assumed that the system is described by a model whose observable output $y_i$, at time step $i$, in response to an input vector $x_i$, is defined by:

$$ y_i = f(x_i) + \epsilon_i, \quad i = 1, 2, \ldots, N \qquad (2.38) $$
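As a concrete illustration of the data model in Eq. (2.38), a synthetic training set can be generated as follows. The underlying function $f$, the input range, and the noise variance are all hypothetical choices for this sketch; the book does not fix them.

```python
import numpy as np

# Hypothetical setup: f and the noise level are illustrative assumptions.
rng = np.random.default_rng(0)
N = 100
x = rng.uniform(-1.0, 1.0, size=(N, 1))    # input vectors x_i
f = lambda v: np.sin(np.pi * v)            # assumed underlying function f
eps = rng.normal(0.0, 0.1, size=(N, 1))    # white noise: zero mean, variance 0.01
y = f(x) + eps                             # observable outputs y_i, Eq. (2.38)
```

The pairs `(x, y)` then play the role of the training set $\{x_i, y_i\}_{i=1}^{N}$ from which $f$ must be estimated.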
where $\epsilon_i$ is a sample drawn from a white noise process of zero mean and variance $\sigma^2$. The modeling problem is to estimate the underlying function of the model, $f$, from the observation data, having already used the existing a priori information to structure and parameterize the model. Let $\hat{f}(x_i) = \hat{f}(x, z)$ be the estimate of $f(x)$ for some values of the $P$-dimensional parameter vector $z$. The model $\hat{f}$ can be estimated in a number of ways; a Local Model Network (LMN) is adopted to achieve this purpose [41]. Figure 2.4 shows the network architecture. This type of network approximates the model function $f(x, z)$ according to:
$$ \hat{f}(x) = \sum_{i=1}^{N_m} \lambda_i f_i(x, z_i) \qquad (2.39) $$

$$ \sum_{i=1}^{N_m} \lambda_i K_i(x, z_i) = \sum_{i=1}^{N_m} \lambda_i \exp\!\left( -\frac{\| x - z_i \|^2}{2\sigma_i^2} \right) \qquad (2.40) $$
where $x = [x_1, \ldots, x_P]^T$ and $z_i = [z_{i1}, \ldots, z_{iP}]^T$ are the input vector and the $i$-th RBF center, respectively. In addition, $\lambda_i$, $i = 1, \ldots, N_m$, are the weights, and $K_i(x, z_i)$ is the nonlinearity of the hidden nodes.
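The network output of Eqs. (2.39)–(2.40) can be sketched as a short function that evaluates the Gaussian kernels $K_i$ and forms the weighted sum. The centers, widths, and weights below are illustrative values, not taken from the book:

```python
import numpy as np

def rbf_output(x, centers, widths, weights):
    """Evaluate Eq. (2.39) with the Gaussian kernels K_i of Eq. (2.40).

    centers: (N_m, P) array of centers z_i
    widths:  (N_m,) array of sigma_i
    weights: (N_m,) array of lambda_i
    """
    d2 = np.sum((centers - x) ** 2, axis=1)       # squared distances ||x - z_i||^2
    kernels = np.exp(-d2 / (2.0 * widths ** 2))   # K_i(x, z_i), Eq. (2.40)
    return float(weights @ kernels)               # sum_i lambda_i K_i, Eq. (2.39)

# Hypothetical two-node network in P = 2 dimensions.
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
widths = np.array([1.0, 1.0])
weights = np.array([2.0, 3.0])
out = rbf_output(np.array([0.0, 0.0]), centers, widths, weights)
```

At a center $z_i$ the corresponding kernel equals 1, so the output there is dominated by that node's weight $\lambda_i$; this locality is what makes the LMN a *local* model network.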
Fig. 2.4 RBF network architecture