For an ergodic process, we can easily obtain a similar empirical estimate for
the covariance: we replace the N trials with N successive samples of the
process so that (assuming zero mean):
$$
\hat{r}_X[n] = \frac{1}{N}\sum_{i=1}^{N} X[i]\,X[i-n]
$$
The empirical autocorrelation has the form of an inner product between two
displaced copies of the same sequence. We saw in Section 5.2 that this rep-
resents a measure of self-similarity. In white processes the samples are
independent, so that even for the smallest displacement (n = 1) the process is
totally "self-dissimilar". Consider now a process with a lowpass power spec-
trum; this means that the autocorrelation varies slowly with the index, and
we can deduce that the process possesses a long-range self-similarity, i.e. it
is smooth. Similarly, a highpass power spectrum implies a fast-varying auto-
correlation, i.e. a process whose self-similarity varies rapidly in time.
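This behavior is easy to verify numerically. The sketch below (using NumPy; the white and lowpass processes are synthetic examples chosen for illustration, not taken from the text) computes the empirical autocorrelation of a white process and of a crudely lowpass-filtered version of it: for the white process the estimate collapses already at lag n = 1, while the smoothed process retains strong self-similarity.

```python
import numpy as np

def empirical_autocorr(x, n):
    """Empirical autocorrelation: (1/N) * sum_i x[i] * x[i-n]."""
    N = len(x)
    return sum(x[i] * x[i - n] for i in range(n, N)) / N

rng = np.random.default_rng(0)
white = rng.standard_normal(10_000)          # white process: independent samples
smooth = np.convolve(white, np.ones(8) / 8)  # crude 8-tap moving-average lowpass

r0_w, r1_w = empirical_autocorr(white, 0), empirical_autocorr(white, 1)
r0_s, r1_s = empirical_autocorr(smooth, 0), empirical_autocorr(smooth, 1)

# white: r[1] is near zero relative to r[0]; lowpass: r[1] stays close to r[0]
print(abs(r1_w / r0_w), r1_s / r0_s)
```

For the moving-average process the lag-1 ratio is close to 7/8, since adjacent outputs share seven of their eight input samples.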
Example 8.2: Top secret filters
Stochastic processes are a fundamental tool in adaptive signal processing, a
more advanced type of signal processing in which the system changes over
time to better match the input in pursuit of a given goal. A typical exam-
ple is audio coding, in which the signal is compressed by algorithms which
are modified as a function of the type of content. Another pro-
totypical application is denoising, in which we try to remove spurious noise
from a signal. Since these systems are adaptive, they are best described as a
function of a probabilistic model for the input.
In this book, stochastic processes will be used mainly as a tool to study the
effects of quantization (hence the rather succinct treatment heretofore). We
can nonetheless try to "get a taste" of adaptive processing by considering
one of the fundamental results in the art of denoising, namely the Wiener
filter. This filter was developed by Norbert Wiener during World War II as
a tool to smooth out the tracked trajectories of enemy airplanes and aim
the antiaircraft guns at the most likely point the target would pass by next.
Because of this sensitive application, the theory behind the filter remained
classified until well after the end of the war.
The problem, in its essence, is shown in Figure 8.2 and it begins with a signal
corrupted by additive noise:
$$
X[n] = S[n] + W[n]
$$
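To make the additive-noise model concrete, the following sketch simulates an observation X[n] = S[n] + W[n] and applies a simple moving-average smoother. This is not the Wiener filter itself (which is derived from the power spectra of S and W), only a naive stand-in chosen to illustrate the denoising goal; the sinusoidal signal and noise level are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
n = np.arange(2000)
s = np.sin(2 * np.pi * n / 200)         # slowly varying "clean" signal S[n]
w = 0.5 * rng.standard_normal(n.size)   # zero-mean additive noise W[n]
x = s + w                               # observed process X[n] = S[n] + W[n]

# naive denoiser: 11-tap moving average (a crude smoother, not the Wiener filter)
h = np.ones(11) / 11
s_hat = np.convolve(x, h, mode="same")

mse_before = np.mean((x - s) ** 2)      # noise power in the raw observation
mse_after = np.mean((s_hat - s) ** 2)   # residual error after smoothing
print(mse_before, mse_after)
```

Because the signal varies slowly relative to the averaging window, the smoother attenuates the (spectrally flat) noise far more than the signal; the Wiener filter formalizes exactly this trade-off.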
both the clean signal S[n] and the noise W[n] are assumed to be zero-mean
stationary processes; assume further that they are jointly stationary and in-