5.1 Generic Issues in Closed-Loop Control of Nonlinear Systems
5.1.1 Basic Model of Closed-Loop Control
The principle of closed-loop control, or feedback control, is to cancel the effects
of disturbances on the reference dynamics of the system by closing the control
loop, i.e., by establishing a functional dependence of the command signal on
the state of the system. That is achieved by implementing a control law in
a controller. A controller is a device whose input is the state of the system to
be controlled (or, more generally, the output of that system if its state is not
completely observed). The controller then determines the value of the control
signal that will be used to control the original system at the next instant. Let
us consider a dynamical system as defined in Chap. 4,
x(k+1) = f[x(k), u(k)],
where x(k) is the state vector of the model at instant k, and u(k) is the
vector of control signals at instant k. The controller determines the value of
that control signal vector from the state vector according to a function ψ,
u(k) = ψ[x(k)].
That function is called the control law.
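As a minimal sketch of this closed loop, the following simulation pairs a hypothetical nonlinear plant (a damped pendulum discretized with Euler steps) with a linear state-feedback control law ψ; the dynamics f, the time step, and the gain vector K are illustrative assumptions, not taken from the text.

```python
import numpy as np

DT = 0.01  # integration time step (assumed)

def f(x, u):
    """Hypothetical plant x(k+1) = f[x(k), u(k)]: damped pendulum, Euler step."""
    theta, omega = x
    return np.array([
        theta + DT * omega,
        omega + DT * (-9.81 * np.sin(theta) - 0.1 * omega + u),
    ])

def psi(x, K=np.array([20.0, 5.0])):
    """Control law u(k) = psi[x(k)]: linear state feedback (illustrative gains)."""
    return -K @ x

x = np.array([0.5, 0.0])  # start away from the desired equilibrium
for k in range(2000):
    u = psi(x)            # the controller reads the state ...
    x = f(x, u)           # ... and drives the plant at the next instant
# after the loop, x has been regulated close to the origin
```

The loop makes the structure explicit: at each instant the controller maps the current state to a control value, and the plant advances one step under that control.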
The simplest purpose that can be assigned to a control system is to keep
the system in a desired state in spite of any disturbance (the control is then
said to reject disturbances): such a design is a servo-system. Another possible
purpose is to keep the state trajectory of the controlled system as close as
possible to a desired state trajectory: such a design is a tracking system.
In those cases, which are very common in applications, the desired state is
called the setpoint in servo control, and the reference trajectory in tracking;
naturally enough, the control law is based on the difference between the
setpoint or reference trajectory and the actual state.
Such a closed-loop system is shown in Fig. 5.1.
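The servo case can be sketched in a few lines: the control law acts on the error between the setpoint and the state. The scalar plant, the feedback-linearizing term, and the gain below are illustrative assumptions chosen so that the error contracts at each step.

```python
DT, KP = 0.01, 5.0   # time step and proportional gain (assumed)
SETPOINT = 1.0       # desired state

def f(x, u):
    """Hypothetical nonlinear plant: x(k+1) = x(k) + DT * (-x(k)^3 + u(k))."""
    return x + DT * (-x**3 + u)

def psi(x):
    """Error-based control law: cancel the cubic term, then feed back the error."""
    return x**3 + KP * (SETPOINT - x)

x = 0.0
for k in range(2000):
    x = f(x, psi(x))
# the state has converged to the setpoint: the disturbance-free servo behavior
```

With the nonlinearity cancelled, each step multiplies the error by (1 - DT*KP) = 0.95, so the state settles on the setpoint geometrically.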
When the state is not completely known, the control can only be a function
of the observations. Therefore, for such a system, the relevant equations are
the state equation, the measurement equation and the control law,
x(k+1) = f[x(k), u(k)]
y(k) = g[x(k)]
u(k) = ψ[y(k)].
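To illustrate output feedback, the sketch below lets the controller see only the measurement y(k) = g[x(k)] (here the position of a damped double integrator), not the full state; the plant, the measurement map, and the gain are illustrative assumptions.

```python
import numpy as np

DT = 0.01  # integration time step (assumed)

def f(x, u):
    """State equation: damped double integrator, Euler-discretized."""
    pos, vel = x
    return np.array([pos + DT * vel,
                     vel + DT * (u - 1.0 * vel)])

def g(x):
    """Measurement equation: only the position is observed."""
    return x[0]

def psi(y):
    """Control law acting on the output alone (illustrative gain)."""
    return -4.0 * y

x = np.array([1.0, 0.0])
for k in range(5000):
    y = g(x)             # measurement equation
    x = f(x, psi(y))     # state equation closed through the output
# the closed loop is stable: the state has decayed toward the origin
```

Composing f, g, and ψ as above turns the controlled system into an autonomous one, x(k+1) = f[x(k), ψ(g[x(k)])], which is exactly the point made next about stability.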
Clearly, a controlled dynamical system with its control law is actually equivalent
to an autonomous dynamical system. Therefore, its stability must be
investigated. If some stochastic process is added in the equations to model