MPC to solve an optimization problem is $O((dP)^3)$ [6]. As the order of magnitude of $P$ is usually 1, the computational burden of RMPC is much smaller than that of MPC.
3 Convergence Analysis of RMPC
Set the weight matrix $Q$ in (2) to $I$ for convenience. The RMPC converges if the cost function satisfies

$$J \le \varepsilon \qquad (6)$$

where $\varepsilon$ is a sufficiently small real number given as the required tolerance. Then (6) can be achieved if all the steps in a predictive horizon satisfy:
$$J_i = e^{\mathrm{T}}(k+i)\,e(k+i) < \frac{\varepsilon}{P}, \quad i = 1, \cdots, P \qquad (7)$$
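As a rough sketch of this stopping test (all plant errors and values below are hypothetical), condition (7) can be checked step by step over the horizon; satisfying it at every step implies the total-cost condition (6):

```python
import numpy as np

# Hypothetical predicted tracking errors e(k+i) over a horizon P = 3
P = 3
eps = 1e-2                      # required tolerance for the total cost J
errors = [np.array([0.01, 0.02]),
          np.array([0.015, 0.01]),
          np.array([0.005, 0.02])]

# Per-step costs J_i = e(k+i)^T e(k+i); condition (7): J_i < eps / P
J_steps = [float(e @ e) for e in errors]
steps_ok = all(J < eps / P for J in J_steps)

# If (7) holds at every step, the total cost satisfies (6): J <= eps
J_total = sum(J_steps)
print(steps_ok, J_total <= eps)
```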
The convergence of the P-type algorithm for LTI plant (1) has been well established in the literature [10]. Some important analysis results are described below.
System (1) is equivalent to:
$$y_j(k) = C(qI - A)^{-1}Bu_j(k) + CAx_0 \qquad (8)$$
where $x_j(0) = x_0$, and $q$ is the forward time-shift operator, $qx(k) \equiv x(k+1)$.
Let $H = C(qI - A)^{-1}B$, and let $\rho(A) = \max_i |\lambda_i(A)|$ be the spectral radius of the matrix $A$, where $\lambda_i(A)$ is the $i$-th eigenvalue of $A$ ranked in descending (ascending) order. Then system (1), (4) is convergent if

$$\rho(I - LH) < 1 \qquad (9)$$
Hence, if the ILC runs for a sufficient number of cycles, the first MV in the input sequence can achieve (7), i.e.:

$$J_1 \le \varepsilon \qquad (10)$$
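A minimal sketch of how repeated P-type trials drive the tracking cost down, assuming a hypothetical scalar plant and learning gain (the one-step shift of the error in the update reflects the unit relative degree of the plant):

```python
import numpy as np

# Hypothetical scalar plant y(k+1) = a*y(k) + b*u(k), restarted from
# the same initial state y(0) = 0 on every ILC trial.
a, b, L = 0.5, 1.0, 0.8
N = 20                          # trial length
y_ref = np.ones(N)
y_ref[0] = 0.0                  # initial output is fixed by y(0)
u = np.zeros(N)                 # initial input guess

for trial in range(50):
    y = np.zeros(N)
    for k in range(N - 1):
        y[k + 1] = a * y[k] + b * u[k]
    e = y_ref - y
    # P-type learning update; e is shifted one step because u(k)
    # first affects y(k+1). Convergent here since |1 - L*b| < 1.
    u[:-1] = u[:-1] + L * e[1:]

final_err = float(e @ e)
print(f"final squared tracking error: {final_err:.1e}")
```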
As RLMA is virtually the same as LMA after an elapse of $M-1$ time samples, its convergence properties are similar to those of LMA. It is well known that the damping factor $h$ in LMA can be adjusted to guarantee local convergence of the algorithm. However, LMA may not converge well if the initial guess is far from the solution [13].
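The damping adjustment can be sketched as follows; this is a generic Levenberg-Marquardt step on a hypothetical convex residual, not the exact RLMA of the paper:

```python
import numpy as np

def lma(f, jac, x0, h=1.0, tol=1e-12, max_iter=100):
    """Minimal Levenberg-Marquardt sketch: the damping factor h is
    increased when a step fails and decreased when it succeeds."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r, J = f(x), jac(x)
        if r @ r < tol:
            break
        # Damped normal equations: (J^T J + h I) dx = -J^T r
        dx = np.linalg.solve(J.T @ J + h * np.eye(x.size), -J.T @ r)
        r_new = f(x + dx)
        if r_new @ r_new < r @ r:
            x, h = x + dx, h * 0.5   # step accepted: relax damping
        else:
            h *= 2.0                 # step rejected: damp more strongly
    return x

# Hypothetical convex least-squares problem with optimum at (1, 2)
f = lambda x: x - np.array([1.0, 2.0])
jac = lambda x: np.eye(2)

x_opt = lma(f, jac, x0=[0.0, 0.0])   # good initial guess, e.g. from ILC
print(x_opt)
```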
Fortunately, the first MV derived from ILC can provide a good initial value for LMA, so the local convergence of LMA can be guaranteed. As (2) is a convex function, the global optimum is unique whenever the problem has a feasible solution. So LMA can guarantee global convergence for this problem. As RLMA is virtually