Digital Signal Processing Reference
In-Depth Information
justified by Robbins and Monro in the context of providing iterative param-
eter estimation based on random observations [252].
The Robbins-Monro problem is closely related to the task of finding a
set of optimal filter coefficients while a random signal is being acquired. The
LMS algorithm is in fact an application of the stochastic approximation
principle to the steepest-descent algorithm (also known as the
deterministic-gradient algorithm), using a stochastic estimate of the gra-
dient vector and a fixed step size. This simple and extremely efficient idea
is historically attributed to Widrow and Hoff [303]. The stochastic approx-
imation employed in the LMS is straightforward: it consists of replacing the
correlation matrix and the cross-correlation vector by instantaneous,
unbiased estimates. In doing so, we get
$$\hat{\mathbf{R}} = \mathbf{x}(n)\,\mathbf{x}^{T}(n) \tag{3.61}$$
and
$$\hat{\mathbf{p}} = d(n)\,\mathbf{x}(n) \tag{3.62}$$
so that the stochastic gradient vector becomes
$$\hat{\nabla}_{\mathbf{w}} J[\mathbf{w}(n)] = \mathbf{x}(n)\,\mathbf{x}^{T}(n)\,\mathbf{w}(n) - d(n)\,\mathbf{x}(n) \tag{3.63}$$
If we apply the above expressions in the steepest-descent algorithm, it
follows that
$$\mathbf{w}(n+1) = \mathbf{w}(n) - \mu\left[\hat{\mathbf{R}}\,\mathbf{w}(n) - \hat{\mathbf{p}}\right] = \mathbf{w}(n) - \mu\left[\mathbf{x}(n)\,\mathbf{x}^{T}(n)\,\mathbf{w}(n) - d(n)\,\mathbf{x}(n)\right] \tag{3.64}$$
or, equivalently,
$$\mathbf{w}(n+1) = \mathbf{w}(n) - \mu\left[y(n) - d(n)\right]\mathbf{x}(n) = \mathbf{w}(n) + \mu\left[d(n) - y(n)\right]\mathbf{x}(n) \tag{3.65}$$
which is the expression of the LMS algorithm.
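This update translates directly into a few lines of code. The sketch below is a minimal NumPy implementation of the LMS recursion, assuming a transversal (FIR) filter structure and the zero initial condition w(0) = 0; the function name and signature are illustrative, not from the text.

```python
import numpy as np

def lms(x, d, num_taps, mu):
    """LMS adaptive filter: w(n+1) = w(n) + mu * [d(n) - y(n)] * x(n)."""
    N = len(x)
    w = np.zeros(num_taps)       # initial condition w(0) = 0
    y = np.zeros(N)              # filter output y(n) = w^T(n) x(n)
    e = np.zeros(N)              # a priori error e(n) = d(n) - y(n)
    for n in range(N):
        # regressor x(n) = [x(n), x(n-1), ..., x(n-M+1)]^T (zeros before n = 0)
        xn = np.array([x[n - k] if n - k >= 0 else 0.0
                       for k in range(num_taps)])
        y[n] = w @ xn
        e[n] = d[n] - y[n]
        w = w + mu * e[n] * xn   # stochastic-gradient update of Eq. (3.65)
    return w, y, e
```

Note that each iteration costs only O(M) operations per sample, which is what makes the instantaneous estimates so attractive compared with forming R and p explicitly.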
Now, it is useful to revisit the previous example to illustrate the applica-
tion of the LMS and to establish some comparisons with the steepest-descent
algorithm.
Example 3.4 (Channel Equalization with the LMS Algorithm)
Let us return to the equalization problem studied in Example 3.3. We will now
use μ = 0.1 and the same initial condition, this time with the LMS algorithm.
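Since the details of Example 3.3 are not reproduced in this section, the sketch below uses an assumed two-tap channel and binary (BPSK) symbols purely for illustration; only the step size μ = 0.1 is taken from the example, and all other parameters are hypothetical.

```python
import numpy as np

# Hypothetical setup: the channel h, signal model, and filter length below
# are assumptions, not the actual parameters of Example 3.3.
rng = np.random.default_rng(1)
N, M, mu, delay = 3000, 8, 0.1, 4         # mu = 0.1 as in the example
s = rng.choice([-1.0, 1.0], size=N)       # transmitted BPSK symbols
h = np.array([1.0, 0.4])                  # assumed channel impulse response
x = np.convolve(s, h)[:N]                 # received (channel-distorted) signal
x += 0.01 * rng.standard_normal(N)        # small additive noise

w = np.zeros(M)                           # same zero initial condition
errors = np.empty(N)
for n in range(N):
    xn = np.array([x[n - k] if n - k >= 0 else 0.0 for k in range(M)])
    d = s[n - delay] if n >= delay else 0.0   # delayed training symbol d(n)
    e = d - w @ xn                        # e(n) = d(n) - y(n)
    errors[n] = e
    w += mu * e * xn                      # LMS update, Eq. (3.65)
```

Plotting |e(n)| against the learning curve of the steepest-descent algorithm would show the noisy, sample-by-sample convergence that is characteristic of the LMS.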