\min_{[u(0)\; u(1)\; \cdots\; u(N)]} \sum_{i=0}^{N} l(x(i), u(i)) + l_N(x(N+1))    (23)

subject to x(i+1) = f_d(x(i), u(i)), \quad i = 0, 1, \cdots, N    (24)

and x(0) = x_0    (25)
Notice that the solution to this problem is a discrete-time control signal,
[u^{o}(0)\; u^{o}(1)\; \cdots\; u^{o}(N)]    (26)
(where the superscript "o" denotes optimal) that depends only on knowledge of x_0.
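As a concrete illustration of the nonlinear programming problem (23)-(25), the following Python sketch stacks the control sequence into a single decision vector, simulates the dynamics forward to evaluate the cost, and hands the problem to a general-purpose solver. The scalar dynamics f_d, the quadratic costs l and l_N, the horizon N, and the use of SciPy's minimize are all illustrative assumptions, not choices made in the text.

import numpy as np
from scipy.optimize import minimize

N = 10          # horizon length (illustrative)
delta = 0.1     # discretization interval (illustrative)
x0 = 1.0        # known initial state

def f_d(x, u):
    # Illustrative discretized dynamics: a forward-Euler step of
    # dx/dt = -x^3 + u. Any f_d of the form in (24) could be used.
    return x + delta * (-x**3 + u)

def l(x, u):
    # Illustrative quadratic stage cost.
    return x**2 + 0.1 * u**2

def l_N(x):
    # Illustrative terminal cost on x(N+1).
    return 10.0 * x**2

def cost(u_seq, x_init):
    # Evaluate the objective of (23): simulate x(1), ..., x(N+1) from
    # x(0) = x_init under u(0), ..., u(N), enforcing (24) by construction.
    x = x_init
    J = 0.0
    for u in u_seq:
        J += l(x, u)
        x = f_d(x, u)
    return J + l_N(x)

# Solve for the open-loop optimal control sequence (26).
res = minimize(cost, np.zeros(N + 1), args=(x0,), method="BFGS")
u_opt = res.x          # [u^o(0), u^o(1), ..., u^o(N)], a function of x0 only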
This will be converted into a closed-loop (and an MPC) controller in two steps.
The first step is to create an idealized MPC controller that is easy to understand
but impossible to build. The second step is to replace this infeasible controller with a
practical, implementable MPC controller.
The idealized MPC controller assumes that y(i) = x(i), ∀i. Then, at i = 0,
x(0) = x_0 is known. Solve the nonlinear programming problem instantaneously.
Apply the control u(0) = u^o(0) on the time interval 0 ≤ t < δ, where δ is the
discretization interval. Next, at time t = δ, equivalently i = 1, obtain the new
value of x, i.e., x(1) = x_1. Again, instantaneously solve the nonlinear programming
problem, exactly as before except using x_1 as the initial condition. Again, apply
only the first step of the newly computed optimal control (denote it by
u(1) = u^o(1)) on the interval δ ≤ t < 2δ.
The idea is to compute, at each time instant, the open-loop optimal control for
the full time horizon of N + 1 time steps but only implement that control for the
first step. Continue to repeat this forever.
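In code, the idealized controller is just the open-loop solver above wrapped in a loop. The sketch below reuses cost, f_d, N, and x0 from the previous sketch and assumes, as in the text, that the full state is measured and that each solve finishes instantly; the 50-step run length simply truncates the "repeat forever" loop.

def solve_horizon(x_init):
    # Re-solve the full-horizon problem (23)-(25) from the current state.
    res = minimize(cost, np.zeros(N + 1), args=(x_init,), method="BFGS")
    return res.x                    # open-loop optimal sequence over the next N+1 steps

# Idealized receding-horizon loop: y(i) = x(i) and the solve takes zero time.
x = x0
for i in range(50):
    u_seq = solve_horizon(x)        # open-loop optimal control from x(i)
    x = f_d(x, u_seq[0])            # apply only the first step, u^o(i), for one interval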
Of course, the full state is not usually available for feedback (i.e., y(i) ≠ x(i))
and it is impossible to solve a nonlinear programming problem in zero time. The
solution to both of these problems is to use an estimate of the state. Let an optimal
(in some sense) estimate of x(k + 1) given all the data up to time k be denoted by
x̂(k + 1|k). For example, assuming noiseless and full state feedback, y(k) = x(k), ∀k,
and the dynamics of the system are given by
x(i+1) = f_d(x(i), u(i)), \quad i = 0, 1, \cdots, N    (27)
then
\hat{x}(k+1 \mid k) = f_d(x(k), u(k))    (28)
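Under this noiseless, full-state-feedback assumption the estimate (28) is nothing more than a one-step simulation of the model. Continuing the earlier sketch:

def predict(x_k, u_k):
    # Equation (28): x_hat(k+1 | k) = f_d(x(k), u(k)), i.e. propagate the
    # measured state one step through the model.
    return f_d(x_k, u_k)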
The implementable version of MPC simply replaces x_i in the nonlinear programming
problem at time t = iδ by x̂(i|i−1) and solves for [u^o(i) u^o(i+1) ··· u^o(N+i)].
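A sketch of this implementable loop, reusing solve_horizon and predict from above; starting from x̂(0|−1) = x_0 and running for 50 steps are illustrative conventions, not part of the text.

# Implementable MPC (sketch): each solve starts from the one-step
# prediction x_hat(i | i-1), not from a fresh measurement, so the solver
# has a full sampling interval in which to finish.
x = x0                              # plant state; y(i) = x(i) is measured
x_hat = x0                          # x_hat(0 | -1): the known initial state
for i in range(50):
    u_seq = solve_horizon(x_hat)    # solved from x_hat(i | i-1): [u^o(i), ..., u^o(N+i)]
    u_i = u_seq[0]                  # only u^o(i) is applied, on i*delta <= t < (i+1)*delta
    x_hat = predict(x, u_i)         # x_hat(i+1 | i) = f_d(y(i), u(i)), ready for the next solve
    x = f_d(x, u_i)                 # plant advances to x(i+1)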
This means that the computation of the next control value can start at time t = iδ
and can take up to the time (i + 1)δ. It can take a long time to solve a complicated
nonlinear programming problem. Because of this, the application of MPC to real