$R_1 = E\left(v(k)\,v^t(k)\right)$  (1.54)

$R_{12} = E\left(v(k)\,e^t(k)\right)$  (1.55)

$R_2 = E\left(e(k)\,e^t(k)\right)$  (1.56)
It is also assumed that the initial condition x(0) is Gaussian distributed with
$m_0 = E\left(x(0)\right)$  (1.57)

$R(0) = E\left((x(0) - m_0)(x(0) - m_0)^t\right)$  (1.58)
where E(.) is the expectation operator. It is supposed that
x̂(k/k−1), u(k) and y(k) are known and the objective is to estimate x̂(k+1/k). The prediction can be improved by feeding the difference between the measured and estimated outputs, y(k) − C x̂(k/k−1), back through a gain K(k):
$\hat{x}(k+1/k) = \Phi\,\hat{x}(k/k-1) + \Gamma\,u(k) + K(k)\left(y(k) - C\,\hat{x}(k/k-1)\right)$  (1.59)
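As an illustration, the update (1.59) can be written as a short numerical routine. The sketch below assumes the state matrix Φ, input matrix Γ and output matrix C of the model, together with a given gain K(k); the function and variable names are illustrative only, and the computation of the gain itself is addressed further below.

```python
import numpy as np

def predict_next_state(x_hat, u, y, Phi, Gamma, C, K):
    """One-step predictor of Eq. (1.59), for a given gain K(k):
    x_hat(k+1/k) = Phi x_hat(k/k-1) + Gamma u(k) + K(k) (y(k) - C x_hat(k/k-1)).
    """
    innovation = y - C @ x_hat      # difference between measured and estimated outputs
    return Phi @ x_hat + Gamma @ u + K @ innovation
```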
The resulting prediction error is the difference between the state of the real system and the estimated one:
$\varepsilon(k+1) = x(k+1) - \hat{x}(k+1/k)$  (1.60)
It should be observed that, since the Gaussian errors v(k) and e(k) mentioned above have zero mean, the mean of the prediction error, ε̄(k) = E(ε(k)), can be verified to satisfy:
$\bar{\varepsilon}(k+1) = \left(\Phi - K(k)\,C\right)\bar{\varepsilon}(k)$  (1.61)
Thus,

$\hat{x}(0) = m_0 \;\Rightarrow\; \bar{\varepsilon}(0) = 0 \;\Rightarrow\; \bar{\varepsilon}(k) = 0,\ \forall k > 0 \quad \left(E(\hat{x}(k)) = m_k\right)$  (1.62)
And if the dynamics of (1.61) are stable, then for any choice of the initial estimate x̂(0):

$\lim_{k\to\infty} \bar{\varepsilon}(k) = 0, \qquad \lim_{k\to\infty} E(\hat{x}(k)) = m_k$  (1.63)
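The convergence claims of (1.61)–(1.63) can be checked numerically; in the sketch below the matrices Φ, C and the gain K are arbitrary illustrative values whose only purpose is to exercise the error recursion.

```python
import numpy as np

# Arbitrary illustrative model and gain, only meant to exercise Eq. (1.61).
Phi = np.array([[0.9, 0.1],
                [0.0, 0.8]])
C   = np.array([[1.0, 0.0]])
K   = np.array([[0.4],
                [0.1]])

A_err = Phi - K @ C                            # error-dynamics matrix of Eq. (1.61)
print("spectral radius:", max(abs(np.linalg.eigvals(A_err))))   # < 1  =>  stable dynamics

eps_bar = np.array([[1.0], [-2.0]])            # arbitrary initial mean error
for _ in range(50):                            # iterate eps_bar(k+1) = (Phi - K C) eps_bar(k)
    eps_bar = A_err @ eps_bar
print("mean error after 50 steps:", eps_bar.ravel())   # close to zero, as in Eq. (1.63)
```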
The secondary objective is to minimize the covariance matrix of the prediction error, denoted P(k),

$P(k) = E\left((\varepsilon(k) - \bar{\varepsilon}(k))(\varepsilon(k) - \bar{\varepsilon}(k))^t\right)$  (1.64)
in the sense that the quadratic form α^t P(k) α reaches its minimum in every direction of the state space:

$\min\ \alpha^t P(k)\,\alpha, \qquad \forall \alpha \in \mathbb{R}^n$  (1.65)
The algorithm of the Kalman filter can be summarized by the following iterative process:
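A minimal sketch of one such iteration is given below. It assumes the linear model x(k+1) = Φx(k) + Γu(k) + v(k), y(k) = Cx(k) + e(k) with the covariances R1, R12, R2 of (1.54)–(1.56), uses the standard one-step-predictor gain, and is started from the initial moments m0 and R(0) of (1.57)–(1.58); all names are illustrative.

```python
import numpy as np

def kalman_predictor_step(x_hat, P, u, y, Phi, Gamma, C, R1, R12, R2):
    """One iteration of a standard discrete-time one-step Kalman predictor.

    x_hat : current prediction x_hat(k/k-1), initialised with m0   (Eq. 1.57)
    P     : current error covariance P(k),  initialised with R(0)  (Eq. 1.58)
    Returns the updated prediction x_hat(k+1/k), covariance P(k+1) and gain K(k).
    """
    S = C @ P @ C.T + R2                           # innovation covariance
    K = (Phi @ P @ C.T + R12) @ np.linalg.inv(S)   # gain minimising the error covariance, cf. (1.65)
    innovation = y - C @ x_hat                     # measured minus estimated output
    x_next = Phi @ x_hat + Gamma @ u + K @ innovation   # prediction update, Eq. (1.59)
    P_next = Phi @ P @ Phi.T + R1 - K @ S @ K.T    # covariance recursion
    return x_next, P_next, K
```

At each sampling instant k, the routine is applied to the newly measured pair (u(k), y(k)), so that the gain K(k) and the covariance P(k) are recomputed on-line.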