Chapter 2
Wiener Filtering
Abstract Before moving to the actual adaptive filtering problem, we need to solve
the optimum linear filtering problem (particularly, in the mean-square-error sense).
We start by explaining the analogy between linear estimation and linear optimum
filtering. We develop the principle of orthogonality, derive the Wiener-Hopf equation
(whose solution leads to the optimum Wiener filter), and study the error surface.
Finally, we apply the Wiener filter to the problem of linear prediction (forward and
backward).
2.1 Optimal Linear Mean Square Estimation
Let us assume we have a set of samples {x(n)} and {d(n)} coming from a jointly
wide sense stationary (WSS) process with zero mean. Suppose now we want to find
a linear estimate of d(n) based on the L most recent samples of x(n), i.e.,
$$
\hat{d}(n) = \mathbf{w}^T \mathbf{x}_L(n) = \sum_{l=0}^{L-1} w_l\, x(n-l), \qquad \mathbf{w},\, \mathbf{x}_L(n) \in \mathbb{R}^L,\quad n = 0, 1, \ldots \tag{2.1}
$$
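In words, (2.1) says the estimate is the inner product of the coefficient vector w with the vector x_L(n) = [x(n), x(n−1), ..., x(n−L+1)]^T of the L most recent input samples. A minimal numerical sketch of this (the filter length, coefficients, and signal below are hypothetical choices for illustration, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

L = 4                         # filter length (hypothetical choice)
w = rng.standard_normal(L)    # coefficient vector w in R^L
x = rng.standard_normal(100)  # zero-mean input samples x(n)

def linear_estimate(w, x, n):
    """d_hat(n) = w^T x_L(n) = sum_{l=0}^{L-1} w_l * x(n-l); assumes n >= L-1."""
    L = len(w)
    # x_L(n) = [x(n), x(n-1), ..., x(n-L+1)]^T
    x_L = x[n - L + 1 : n + 1][::-1]
    return w @ x_L

print(linear_estimate(w, x, n=50))
```

Stacking the recent samples into the single vector x_L(n) turns the filter output into one dot product, which is the form the development below works with.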
The introduction of a particular criterion to quantify how well d(n) is estimated by
d̂(n) would influence how the coefficients w_l will be computed. We propose to use
the Mean Squared Error (MSE), which is defined by
$$
J_{\mathrm{MSE}}(\mathbf{w}) = E\left[|e(n)|^2\right] = E\left[\left|d(n) - \hat{d}(n)\right|^2\right], \tag{2.2}
$$
where E[·] is the expectation operator and e(n) is the estimation error. Then, the
estimation problem can be seen as finding the vector w that minimizes the cost
function J_MSE(w). The solution to this problem is sometimes called the stochastic
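As a sanity check on the criterion, J_MSE(w) can be approximated by a time average of |e(n)|² over a long WSS record, and candidate coefficient vectors can be compared directly. The sketch below uses a hypothetical linear signal model (w_true, the noise level, and the record length are illustrative assumptions); the minimizer via the Wiener-Hopf equation is derived later in the chapter, and here the sample normal equations serve as its empirical counterpart:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical model: d(n) is a filtered version of x(n) plus noise.
L = 4
w_true = np.array([0.9, -0.5, 0.3, -0.1])  # illustrative, not from the text
N = 10_000
x = rng.standard_normal(N)
noise = 0.1 * rng.standard_normal(N)

# Regressor matrix whose rows are x_L(n)^T for n = L-1, ..., N-1.
X = np.column_stack([x[L - 1 - l : N - l] for l in range(L)])
d = X @ w_true + noise[L - 1 :]

def j_mse(w):
    """Sample estimate of J_MSE(w) = E[|d(n) - w^T x_L(n)|^2]."""
    e = d - X @ w
    return np.mean(e**2)

# Empirical minimizer: solve the sample normal equations.
w_hat = np.linalg.solve(X.T @ X, X.T @ d)

print("J_MSE at a random w :", j_mse(rng.standard_normal(L)))
print("J_MSE at w_hat      :", j_mse(w_hat))  # close to the noise variance 0.01
```

For this model the empirical MSE at w_hat approaches the noise variance, since the linear part of d(n) is matched exactly and only the additive noise remains unexplained.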