x(k+1) = f_N(μ(k), u(k)),  (1)

where f_N(·) is an unknown nonlinear function and μ(k) = [x(k) x(k−1) ··· x(k−n) u(k−1) u(k−2) ··· u(k−m)]^T. According to some assumptions and the approximation mentioned in [4], the system (1) can be rearranged as
x(k+1) = F_N(μ(k)) + G_N(μ(k)) u(k).  (2)
In this work, let us assume that the nonlinear functions F_N(·) and G_N(·) are both unknown. The control effort u(k) is directly determined by FREN as

u(k) = β^T(k) φ(k),  (3)

where β(k) ∈ R^l is an adjustable parameter vector and φ(k) ∈ R^l is FREN's basis function vector, with l denoting the number of fuzzy rules.
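The control law (3) is a simple inner product between the adjustable parameters and the fuzzy basis activations. The following minimal sketch illustrates it; the Gaussian membership functions, the input to the basis, and all numerical values are illustrative assumptions, since the paper does not fix a specific basis shape here.

```python
import numpy as np

# Sketch of the FREN control law u(k) = beta^T(k) * phi(k) from (3).
# Gaussian membership functions are an assumed choice of basis.

def fren_basis(e, centers, widths):
    """Evaluate the l fuzzy-rule basis functions phi(k) at input e."""
    return np.exp(-((e - centers) ** 2) / (2.0 * widths ** 2))

def fren_control(beta, e, centers, widths):
    """Control effort u(k) = beta(k)^T phi(k)."""
    phi = fren_basis(e, centers, widths)
    return float(beta @ phi), phi

# Example: l = 5 rules with centers spread over an assumed error range.
centers = np.linspace(-1.0, 1.0, 5)
widths = np.full(5, 0.5)
beta = np.zeros(5)  # adjustable parameter vector beta(k), initialized to zero
u, phi = fren_control(beta, e=0.2, centers=centers, widths=widths)
```

With β(k) initialized to zero, the control effort starts at zero and is shaped entirely by the subsequent parameter tuning.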
3 Parameter Tuning Algorithm
In this work, only an on-line learning mechanism is applied, with the assistance of some designed parameters. The gradient descent method with the proposed time-varying step size is introduced to adjust the parameters β_i for i = 1, 2, ···, l.
The cost function E(k), which needs to be minimized, can be defined as

E(k) = (1/2) e²(k),  (4)
where e(k) = x_d(k) − x(k).
At time index k + 1, all adjustable parameters β_i can be determined by

β_i(k+1) = β_i(k) − η(k) ∂E(k+1)/∂β_i(k),  (5)
where η(k) is a time-varying learning rate. In this work, we introduce a determination method to obtain the largest possible learning rate for which system stability can still be guaranteed.
Applying the chain rule through (4) and (2), we obtain

∂E(k+1)/∂β_i(k) = [∂E(k+1)/∂x(k+1)] [∂x(k+1)/∂u(k)] [∂u(k)/∂β_i(k)]
                = −[x_d(k+1) − x(k+1)] y_p(k) φ_i(k).  (6)
Thus, the tuning law can be rewritten as

β_i(k+1) = β_i(k) + η_i(k) e(k+1) y_p(k) φ_i(k),  (7)
where y_p(k) denotes ∂x(k+1)/∂u(k). Considering the system formulation in (2) again, clearly we have

y_p(k) = G_N(μ(k)).  (8)
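One update step of the tuning law (7) can be sketched as follows. For illustration, y_p(k) = G_N(μ(k)) is treated as a known scalar and all numerical values are assumed; in the paper G_N is unknown, and the time-varying learning rate η_i(k) is designed separately to guarantee stability.

```python
import numpy as np

# Sketch of the gradient-descent tuning law (7):
#   beta_i(k+1) = beta_i(k) + eta_i(k) * e(k+1) * y_p(k) * phi_i(k),
# with y_p(k) = G_N(mu(k)) from (8). Here y_p and eta are assumed
# scalars purely for illustration.

def tune_beta(beta, eta, e_next, y_p, phi):
    """Apply one update step of (7) to the whole parameter vector."""
    return beta + eta * e_next * y_p * phi

beta = np.array([0.1, -0.2, 0.0])   # beta_i(k)
phi = np.array([0.5, 0.3, 0.2])     # basis activations phi_i(k)
beta_new = tune_beta(beta, eta=0.1, e_next=0.4, y_p=2.0, phi=phi)
```

Note that each parameter moves in proportion to its own basis activation φ_i(k): rules that contributed more to the current control effort receive larger corrections.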