Actually, it is not possible to find exact and well-defined solutions for this system. The right-hand side of that equation,

$$\varphi(k+1) = y(k+1) - HAx(k) - HBu(k),$$

is called the innovation at time k. It is the error of the prediction of the new observation y(k+1) from the previous estimate of the state. That error provides us with new information that can be used for estimating a posteriori the state x(k+1) in a Bayesian framework.

If the system is completely observable, it can be shown that it is possible to select a matrix gain sequence (K_k) such that the following recursive estimate converges:

$$x(k+1) = Ax(k) + Bu(k) + K_{k+1}\,\varphi(k+1).$$
The (K_k) are called innovation gains. This model is called the Luenberger state observer. The innovation gain sequence is constrained by a stability condition, in order to avoid divergence of the filter. For instance, if we want to take a constant innovation gain K in order to get a steady filter, the spectrum of the matrix $A - KHA$ must be contained in the unit disc (the moduli of all its eigenvalues must be smaller than 1).
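As an illustration, here is a minimal numerical sketch of such a steady observer in Python. All matrices and values (A, B, H, the trial gain K, the measurement) are hypothetical, chosen only to make the example self-contained; the assertion implements the spectral condition just stated.

```python
import numpy as np

# Hypothetical system: x(k+1) = A x(k) + B u(k), y(k) = H x(k).
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
H = np.array([[1.0, 0.0]])

K = np.array([[0.5],
              [0.3]])  # constant innovation gain (arbitrary trial value)

# Stability condition for a steady filter: all eigenvalues of A - KHA
# must lie strictly inside the unit disc.
eigvals = np.linalg.eigvals(A - K @ H @ A)
assert np.all(np.abs(eigvals) < 1.0), "gain K violates the stability condition"

def observer_step(x_est, u, y_next):
    """One Luenberger observer update:
    innovation phi(k+1) = y(k+1) - HAx(k) - HBu(k),
    estimate   x(k+1)   = Ax(k) + Bu(k) + K phi(k+1)."""
    prediction = A @ x_est + B @ u
    phi = y_next - H @ prediction       # innovation
    return prediction + K @ phi

# One illustrative update from an arbitrary initial estimate.
x_est = np.zeros((2, 1))
u = np.array([[1.0]])
y_next = np.array([[0.2]])              # made-up measurement
print(observer_step(x_est, u, y_next).ravel())
```

The spectral test is exactly the condition of the text: with a constant gain, the estimation error evolves as e(k+1) = (A − KHA) e(k), so the filter diverges as soon as one eigenvalue of A − KHA leaves the unit disc.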
4.4.1.3 Variational Approach to Optimal Filtering
The computation of the innovation gain sequence is performed by minimizing a cost function. If the measurements were exact, one could simply take the sum over k of the $\|v_k\|^2$, where $v_{k+1}$ denotes the correction added to the model prediction, $x(k+1) = Ax(k) + Bu(k) + v_{k+1}$, subject to the observation equation holding exactly. However, in most applications, one has to take the measurement errors into account as well. Then the cost function is the sum over k of the following instantaneous cost:

$$j(v_{k+1}) = \lambda\,\|v_{k+1}\|^2 + \mu\,\|y(k+1) - HAx(k) - HBu(k) - Hv_{k+1}\|^2.$$
That least squares criterion is a balance between the model uncertainty,
which is weighted by the parameter λ , and the measurement uncertainty,
which is weighted by the parameter µ . The tuning of those two hyperparame-
ters requires some prior knowledge of the system.
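To make that balance concrete, the following sketch minimizes the instantaneous cost for several settings of λ and µ by rewriting it as an ordinary least-squares problem. The numbers are hypothetical: a two-dimensional state observed through a one-dimensional measurement, with a made-up innovation of 0.4.

```python
import numpy as np

H   = np.array([[1.0, 0.0]])   # hypothetical observation matrix
phi = np.array([[0.4]])        # made-up innovation y(k+1) - HAx(k) - HBu(k)

def best_correction(lam, mu):
    """Minimize lam*||v||^2 + mu*||phi - H v||^2 by stacking the two
    penalty terms into one ordinary least-squares problem."""
    A_ls = np.vstack([np.sqrt(lam) * np.eye(2), np.sqrt(mu) * H])
    b_ls = np.vstack([np.zeros((2, 1)), np.sqrt(mu) * phi])
    v, *_ = np.linalg.lstsq(A_ls, b_ls, rcond=None)
    return v

for lam, mu in [(1.0, 1.0), (10.0, 1.0), (0.1, 10.0)]:
    print(f"lambda={lam:4.1f}  mu={mu:4.1f}  ->  v = {best_correction(lam, mu).ravel()}")
```

Raising λ shrinks the correction (trust in the model), while raising µ drives Hv toward the observed innovation (trust in the measurement); with λ = µ = 1 the minimizer is v = (0.2, 0), splitting the innovation evenly.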
Then the innovation gain is computed by solving that quadratic optimization problem. The solution is obtained straightforwardly by setting the gradient of the cost function to zero:

$$0 = 2(\lambda I + \mu H^{T}H)\,v_{k+1} - 2\mu H^{T}\bigl[y(k+1) - HAx(k) - HBu(k)\bigr].$$
Therefore $v_{k+1} = K_{k+1}\,\varphi(k+1)$, where the innovation gain is equal to

$$K_{k+1} = (\lambda I + \mu H^{T}H)^{-1}\mu H^{T} = \mu H^{T}(\lambda I + \mu H H^{T})^{-1}.$$
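A short numerical check of that formula, reusing the hypothetical H, λ, and µ of the previous sketch: the two expressions for the gain differ only in whether an n×n or a p×p matrix is inverted (n being the state dimension, p the observation dimension), and K_{k+1}φ(k+1) reproduces the least-squares correction found above.

```python
import numpy as np

H = np.array([[1.0, 0.0]])       # hypothetical observation matrix, as above
lam, mu = 1.0, 1.0
n, p = H.shape[1], H.shape[0]    # state and observation dimensions

# The two equivalent forms of the innovation gain.
K1 = np.linalg.solve(lam * np.eye(n) + mu * H.T @ H, mu * H.T)
K2 = mu * H.T @ np.linalg.inv(lam * np.eye(p) + mu * H @ H.T)
assert np.allclose(K1, K2)       # the push-through identity holds

phi = np.array([[0.4]])          # the made-up innovation used above
print("v = K phi:", (K1 @ phi).ravel())   # -> [0.2 0.], as in the sketch above
```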
Note that the values of the hyperparameters λ and µ may be time-dependent, or may take matrix form. A fine tuning of the hyperparameters is only possible when sufficient prior knowledge of the system is available. Moreover, one has
to check that the solution obeys the stability constraint. The probabilistic
interpretation of optimal filtering gives insight into those issues, which will be
further considered in the next section.