5.5 Recursive Least Squares (RLS)
Consider Eq. (5.8), which will be rewritten as:

$$\mathbf{w}(n-1) = \left(\mathbf{C}_{n-1}^T \mathbf{C}_{n-1}\right)^{-1} \mathbf{C}_{n-1}^T \mathbf{d}_{n-1}. \qquad (5.41)$$
In (5.41) we have explicitly denoted that the quantities in (5.3) are generated using the $n$ input-output pairs available up to time $n-1$.$^{11}$ In practice it is interesting to deal with the situation in which $n$ increases. In some situations it would be desirable to obtain an LS solution for a given number of input-output pairs without waiting to have them all; each time a new pair becomes available, a new LS solution taking the new input-output pair into account would be computed. This is the case when input-output pairs are obtained in real time, arriving sequentially. Calculating the new LS solution would imply implementing (5.41) with $\mathbf{C}_n$ and $\mathbf{d}_n$. The main problem with this is the need to obtain the inverse of $\mathbf{C}_n^T \mathbf{C}_n$. If this matrix is $L \times L$, the computational cost of obtaining its inverse is $O(L^3)$. If the dimension $L$ is large, the cost of inverting an $L \times L$ matrix for every input-output pair received can be overwhelming.
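To make the cost concrete, here is a minimal numpy sketch (with hypothetical data and dimensions) of this naive approach: it rebuilds $\mathbf{C}_n$ and $\mathbf{d}_n$ and re-solves (5.41) from scratch for every incoming pair, and the explicit inverse of the $L \times L$ matrix is the $O(L^3)$ bottleneck discussed above.

```python
import numpy as np

# Hypothetical dimensions and data, for illustration only.
L, N = 8, 100                     # filter length, number of input-output pairs
rng = np.random.default_rng(0)
x = rng.standard_normal((N, L))   # row i holds x^T(i)
d = rng.standard_normal(N)        # desired samples d(i)

# Naive approach: re-solve (5.41) from scratch as each new pair arrives.
for n in range(L, N):
    C_n = x[: n + 1]              # C_n stacks x^T(0), ..., x^T(n)
    d_n = d[: n + 1]
    # Explicit inversion of the L x L matrix C_n^T C_n: O(L^3) per pair.
    w = np.linalg.inv(C_n.T @ C_n) @ (C_n.T @ d_n)
```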
On the other hand, we see that the vector $\mathbf{d}_n$ and the matrix $\mathbf{C}_n$ can be written as:

$$\mathbf{d}_n = \begin{bmatrix} \mathbf{d}_{n-1} \\ d(n) \end{bmatrix}, \qquad \mathbf{C}_n = \begin{bmatrix} \mathbf{C}_{n-1} \\ \mathbf{x}^T(n) \end{bmatrix}. \qquad (5.42)$$
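In code form, the structure of (5.42) is simply a one-row append; a small sketch with hypothetical arrays:

```python
import numpy as np

C_prev = np.ones((3, 2))          # stands in for C_{n-1} (3 pairs so far, L = 2)
d_prev = np.ones(3)               # stands in for d_{n-1}
x_new = np.array([0.5, -1.0])     # new input vector x(n)
d_new = 0.2                       # new desired sample d(n)

# Eq. (5.42): C_n and d_n reuse everything already stored at time n-1.
C_n = np.vstack([C_prev, x_new])  # append the single new row x^T(n)
d_n = np.append(d_prev, d_new)    # append the single new sample d(n)
```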
These relations show that there is a large overlap of information between $\mathbf{C}_n$ and $\mathbf{C}_{n-1}$. The natural question is: could this structure in $\mathbf{C}_n$ and $\mathbf{d}_n$ be exploited in some smart way in order to reduce the aforementioned computational requirement? The answer is affirmative, as we will show next.
First, we consider the following weighted cost function for the LS problem:$^{12}$

$$\mathbf{w}(n) = \arg\min_{\mathbf{w} \in \mathbb{R}^L} \left\| \boldsymbol{\Lambda}_n^{1/2} \left( \mathbf{d}_n - \mathbf{C}_n \mathbf{w} \right) \right\|^2, \qquad (5.43)$$
where $\boldsymbol{\Lambda}_n$ is an $(n+1) \times (n+1)$ diagonal matrix given by

$$\boldsymbol{\Lambda}_n = \operatorname{diag}\left(\lambda^n, \lambda^{n-1}, \ldots, \lambda^1, \lambda^0\right). \qquad (5.44)$$
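As an illustration, a minimal sketch (hypothetical data; $\lambda$ chosen arbitrarily) that builds the weights of (5.44) and solves the weighted problem (5.43) directly through its normal equations, $\mathbf{C}_n^T \boldsymbol{\Lambda}_n \mathbf{C}_n \, \mathbf{w}(n) = \mathbf{C}_n^T \boldsymbol{\Lambda}_n \mathbf{d}_n$:

```python
import numpy as np

lam, L, n = 0.98, 4, 50            # forgetting factor, filter length, time index
rng = np.random.default_rng(1)
C_n = rng.standard_normal((n + 1, L))
d_n = rng.standard_normal(n + 1)

# Eq. (5.44): weights lambda^n, ..., lambda^0 on times i = 0, ..., n, so the
# most recent pair carries the largest weight (lambda^0 = 1).
weights = lam ** np.arange(n, -1, -1)

# Normal equations of (5.43): (C_n^T Lambda_n C_n) w = C_n^T Lambda_n d_n.
A = C_n.T @ (weights[:, None] * C_n)   # C_n^T Lambda_n C_n without forming Lambda_n
b = C_n.T @ (weights * d_n)            # C_n^T Lambda_n d_n
w = np.linalg.solve(A, b)
```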
Defining the error signal $e(i) = d(i) - \mathbf{x}^T(i)\,\mathbf{w}$, $i = 0, \ldots, n$, we can write the cost in (5.43) as the exponentially weighted sum of squared errors $\sum_{i=0}^{n} \lambda^{n-i} e^2(i)$.
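A quick numerical check of this equivalence (reusing the hypothetical data from the previous sketch, with an arbitrary fixed $\mathbf{w}$):

```python
import numpy as np

lam, L, n = 0.98, 4, 50
rng = np.random.default_rng(1)
C_n = rng.standard_normal((n + 1, L))
d_n = rng.standard_normal(n + 1)
w = rng.standard_normal(L)         # any fixed candidate w

e = d_n - C_n @ w                  # e(i) = d(i) - x^T(i) w, i = 0, ..., n
weights = lam ** np.arange(n, -1, -1)
Lambda_n = np.diag(weights)        # the (n+1) x (n+1) matrix of Eq. (5.44)

cost_norm_form = e @ Lambda_n @ e                # ||Lambda_n^(1/2)(d_n - C_n w)||^2
cost_sum_form = np.sum(weights * e ** 2)         # sum_i lambda^(n-i) e^2(i)
assert np.isclose(cost_norm_form, cost_sum_form)
```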
$^{11}$ Notice that, in order to simplify the notation a little bit, we use $\mathbf{w}$ instead of $\hat{\mathbf{w}}$, which was used in the previous sections to denote the LS solution.
$^{12}$ In the following, and without loss of generality, we will concentrate on the RLS algorithm known as Exponentially Weighted RLS (EWRLS), which generalizes the standard RLS algorithm (with $\lambda = 1$) and is by far the most used and popular version. Another important variant is the sliding window RLS algorithm [11].