time. The state space matrices for this nominal model will be represented by $F_0$ and $G_0$.

As shown, the $n \times 1$ vector of stochastic disturbances $\zeta_k$ is assumed to have a multivariate normal distribution, with zero mean value and covariance matrix $Q_k$; while the scalar measurement noise $\xi_k$ is assumed to have zero mean and variance $\sigma_k^2$, which is made a function of the sampling index $k$ in order to allow for heteroscedasticity in the observational errors. In general, $Q_k$ is also assumed to change over time; as we shall see, however, in the practical example considered later, it is assumed that $Q_k = Q \;\forall\; k$.

The Kalman Filter Forecasting Algorithm

The linear, Gaussian, state space model (Equation 9.3) discussed in the previous section provides the basis for the implementation of the KF state estimation and forecasting algorithm. This algorithm has become very famous since Kalman published his seminal paper in 1960, with reported applications in almost all areas of science, engineering and social science. It is unlikely, however, that many users have actually read this paper. For example, while the KF can be usefully interpreted in Bayesian estimation terms and, indeed, seems the very embodiment of Bayesian estimation, it was actually derived using orthogonal projection theory and so does not rely upon Gaussian assumptions. For this and other reasons, it has proven to be a very robust algorithm that is ideal for practical applications. The KF algorithm is best written (Young 1984) in the following prediction-correction form, under the assumption that the system (Equation 9.3) is stochastically observable:

A priori prediction:

\[ \hat{x}_{k|k-1} = F_k \hat{x}_{k-1} + G_k u_{k-\delta-1} \tag{9.4a} \]

\[ P_{k|k-1} = F_k P_{k-1} F_k^{T} + Q_k \tag{9.4b} \]

A posteriori correction:

\[ \hat{x}_k = \hat{x}_{k|k-1} + P_{k|k-1}\, h \left[ \sigma_k^2 + h^{T} P_{k|k-1}\, h \right]^{-1} \left\{ y_k - h^{T} \hat{x}_{k|k-1} - g_i u_{k-\delta} \right\} \tag{9.4c} \]

\[ P_k = P_{k|k-1} - P_{k|k-1}\, h \left[ \sigma_k^2 + h^{T} P_{k|k-1}\, h \right]^{-1} h^{T} P_{k|k-1} \tag{9.4d} \]

In theory, because the model is linear and the stochastic disturbances $\zeta_k$ and $\xi_k$ are assumed to be normally distributed random variables, the error on the estimate of the state $x_k$ is also normally distributed and, in the above KF equations, $P_k$ is the error covariance matrix associated with $\hat{x}_k$. The subscript notation $k|k-1$ denotes the estimate at the $k$th sampling instant (normally an hour or a day in the present hydrological context), based on the estimate at the previous $(k-1)$th sampling instant.

The estimate $\hat{y}_k$ of the output variable $y_k$ is obtained as the following linear function of the state estimates and any instantaneous input effect, by reference to the observation equation (Equation 9.3):

\[ \hat{y}_k = h^{T} \hat{x}_k + g_i u_{k-\delta} \tag{9.4e} \]

The f-step-ahead forecasts of the output variable $\hat{y}_{k+f|k}$ are obtained by simply repeating the prediction $f$ times, without correction (since no new data over this interval are available). It is straightforward to show that the f-step-ahead forecast variance is then given by:

\[ \operatorname{var}\{\hat{y}_{k+f|k}\} = \sigma_k^2 + h^{T} P_{k+f|k}\, h \tag{9.4f} \]

where $P_{k+f|k}$ is the error covariance matrix estimate associated with the f-step-ahead prediction of the state estimates. This estimate of the f-step-ahead prediction variance is used to derive approximate 95% confidence bounds for the forecasts, under the approximating assumption that the prediction error can be characterized as a nonstationary Gaussian process (i.e. twice the square root of the variance at each time step is used to define the 95% confidence region). The derivation of the above KF equations is not difficult but outside the scope of this chapter. However, the interested reader can find this derivation in any tutorial text on estimation theory, such as Maybeck (1979), Young (1984) or Norton (1986). The derivation is also sketched out in the FRMRC 'User Focussed Measurable Outcome' Report UR5 (Young et al. 2006).
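The prediction-correction recursion and the f-step-ahead forecast variance described above can be sketched in a few lines of code. The following is a minimal illustrative example, not the chapter's operational implementation: it assumes a scalar random-walk "local level" model, omits the deterministic input terms ($G_k u$, $g_i u$) for brevity, holds $\sigma_k^2$ and $Q_k$ constant, and uses synthetic data; all numerical values are arbitrary choices for demonstration.

```python
import numpy as np

def kalman_forecast(y, F, h, Q, sigma2, x0, P0, f):
    """Kalman filter in prediction-correction form (Equation 9.4),
    then an f-step-ahead output forecast with its variance and
    approximate 95% bounds. Deterministic input terms are omitted."""
    x = np.asarray(x0, dtype=float)
    P = np.asarray(P0, dtype=float)
    for yk in y:
        # A priori prediction
        x = F @ x
        P = F @ P @ F.T + Q
        # A posteriori correction
        s = sigma2 + h @ P @ h          # scalar innovation variance
        K = (P @ h) / s                 # gain vector P_{k|k-1} h / s
        x = x + K * (yk - h @ x)        # correct state with innovation
        P = P - np.outer(K, h @ P)      # correct error covariance
    # f-step-ahead: repeat the prediction step, with no correction
    for _ in range(f):
        x = F @ x
        P = F @ P @ F.T + Q
    y_hat = h @ x
    var_f = sigma2 + h @ P @ h          # forecast variance, cf. (9.4)
    half_width = 2.0 * np.sqrt(var_f)   # ~95% region: +/- 2 std devs
    return y_hat, var_f, (y_hat - half_width, y_hat + half_width)

# Synthetic random-walk example: x_k = x_{k-1} + zeta_k, y_k = x_k + xi_k
rng = np.random.default_rng(0)
truth = np.cumsum(rng.normal(0.0, 0.1, 200))
obs = truth + rng.normal(0.0, 0.5, 200)
F = np.eye(1)
h = np.ones(1)
Q = np.eye(1) * 0.01
y_hat, var_f, (lo, hi) = kalman_forecast(obs, F, h, Q, 0.25, [0.0], [[100.0]], f=5)
```

Note how the forecast variance grows with the horizon: each of the `f` uncorrected prediction steps adds `Q` to the error covariance, so the 95% band widens as the forecast lead time increases, just as the text describes. Allowing for heteroscedastic observation noise would simply mean passing a sequence of `sigma2` values indexed by `k`.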