optimize the future plant behavior is known as the prediction horizon; P denotes its length. Similarly, the time interval over which the inputs are adjusted is called the control horizon, and M denotes its length. The process outputs are referred to as controlled variables (CVs).
Instead of penalizing the error between the setpoint and the output directly in the cost function (2), a predefined output reference trajectory is used here to avoid aggressive MV moves:
y_r(k + i) = y(k) + (c − y(k))(1 − α^i),   α ∈ [0, 1]                (3)
where c is the setpoint and α is a tunable parameter.
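As a rough illustration of how (3) shapes the demanded response, the following sketch evaluates the reference trajectory over the prediction horizon (the function and variable names are ours, not taken from the paper):

```python
import numpy as np

def reference_trajectory(y_k, c, alpha, P):
    """Evaluate the softened reference of Eq. (3) over the prediction horizon:
    it starts at the current output y(k) and approaches the setpoint c at a
    rate governed by alpha (alpha close to 1 gives a slower, gentler approach)."""
    i = np.arange(1, P + 1)
    return y_k + (c - y_k) * (1.0 - alpha**i)

# Example: with alpha = 0.6 the reference covers most of the distance to the
# setpoint within a few steps; alpha = 0 reproduces an immediate step to c.
y_ref = reference_trajectory(y_k=0.0, c=1.0, alpha=0.6, P=10)
```

A larger α therefore trades tracking speed for smoother MV moves, which is exactly the purpose stated above.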
2.2 Solving Optimization Problem in MPC
For controllers that use a quadratic cost function such as (2), the dynamic optimization takes the form of a quadratic program (QP) [7]. A variety of methods are commonly used to solve the QP. Most of them either replace the inequality constraints in the QP with linear equality constraints or replace the constrained optimization problem with an unconstrained one. The resulting problem can then be solved by iterative algorithms such as the Levenberg-Marquardt algorithm (LMA) or the Gauss-Newton algorithm (GNA).
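To make the iterative nature of these solvers concrete, here is a minimal Levenberg-Marquardt sketch for the unconstrained quadratic subproblem, assuming a linear prediction model with dynamic matrix S and free response y_free (these names and the penalty weight lam are our own illustration, not quantities defined in the paper; a full MPC solver would additionally fold the inequality constraints into this subproblem as described above):

```python
import numpy as np

def lm_solve_mpc(S, y_free, y_ref, lam, iters=20, mu=1.0):
    """Levenberg-Marquardt iterations for the unconstrained MPC subproblem
        min_du ||y_ref - (y_free + S @ du)||^2 + lam * ||du||^2.
    For this quadratic cost a single Gauss-Newton step is already exact; the
    loop illustrates the repeated passes a general LM/GN solver performs."""
    M = S.shape[1]
    du = np.zeros(M)

    def cost(du):
        e = y_ref - y_free - S @ du
        return e @ e + lam * (du @ du)

    for _ in range(iters):
        e = y_ref - y_free - S @ du
        grad = -S.T @ e + lam * du            # J^T r for r = [e; sqrt(lam)*du]
        H = S.T @ S + lam * np.eye(M)         # J^T J (Gauss-Newton Hessian)
        step = np.linalg.solve(H + mu * np.eye(M), -grad)
        if cost(du + step) < cost(du):        # accept step, relax damping
            du, mu = du + step, 0.5 * mu
        else:                                 # reject step, increase damping
            mu *= 2.0
    return du
```

Each pass requires forming and solving an M-by-M linear system, which is the computational burden discussed next.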
During the operation of LMA or GNA, several passes are made through the data to iteratively improve the solution [9]. However, for very fast processes there may not be sufficient time available to complete these iterations. As the iterative part of solving the QP is the main source of the computational burden in MPC, it is natural to replace the iterative method with a recursive one.
2.3 Solving Optimization Problem in RMPC
RLMA [6] can solve an optimization problem effectively. However, RLMA cannot be adopted directly to minimize the cost function in MPC, because RLMA operates only on information from the present step and earlier steps, and can therefore improve the solution of the optimization problem only gradually. As a consequence, the first control input obtained from RLMA, which is the one applied as the actual input in MPC, often leads to poor results.
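To illustrate why a purely recursive scheme refines its answer only gradually, the following RLS-style recursion updates an estimate with one new sample at a time; it is a generic recursive Gauss-Newton-flavoured update shown for illustration only, not the specific RLMA recursion of [6]:

```python
import numpy as np

def rls_style_update(theta, P, phi, y_meas, lam=0.99):
    """One recursive least-squares style update: the estimate theta is refined
    using a single new regressor phi and measurement y_meas, so the estimates
    produced in the first few steps are still crude."""
    P_phi = P @ phi
    gain = P_phi / (lam + phi @ P_phi)        # update gain
    theta = theta + gain * (y_meas - phi @ theta)
    P = (P - np.outer(gain, P_phi)) / lam     # covariance update with forgetting
    return theta, P
```

Used inside MPC, the very first update of this kind already fixes the input sent to the plant, which is why its quality matters so much.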
Since the first MV must already bring the predicted trajectory reasonably close to the final solution, and the only information available is the prediction result of the first step, this limited information should be used as fully as possible. Accordingly, ILC [8] is adopted here to obtain the first MV. Compared with RLMA, which uses the information of the first step only once, ILC exploits the information of the first step several times to obtain a better result.
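The contrast can be sketched as follows: a generic P-type ILC loop reuses the same first-step prediction over several learning trials, correcting the input in proportion to the remaining error. The helper y_pred_one_step and the gain kp are hypothetical, and the specific learning law of [10] may differ:

```python
def p_type_ilc_first_mv(u0, y_pred_one_step, y_ref_1, kp=0.5, trials=5):
    """Obtain a first MV by iterating a P-type ILC update on the same
    first-step information: u_{j+1} = u_j + kp * e_j, where e_j is the error
    between the first-step reference and the predicted first-step output."""
    u = u0
    for _ in range(trials):
        e = y_ref_1 - y_pred_one_step(u)   # first-step tracking error
        u = u + kp * e                     # P-type learning update
    return u

# Toy usage with a one-step model y(k+1) = 0.8*y(k) + 0.3*u(k) and y(k) = 0:
u_first = p_type_ilc_first_mv(0.0, lambda u: 0.3 * u, y_ref_1=1.0)
```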
As the target of the proposed algorithm is to control fast-varying dynamic systems, a P-type ILC is chosen to obtain the first MV as follows [10]: