8.6.2 Linear System Models
In the previous sections we have assumed that the information sequence
can be modeled
by a sequence of iid random variables. In practice most information sequences derived from
real sources such as speech will contain dependencies. In an ideal world we would characterize
these dependencies using the joint pdf of the sequence elements. In practice such an approach
is not feasible. Instead, we try to characterize the dependencies in terms of correlation between
samples. An intuitive and useful way of modeling the correlation between samples is to view
the information sequence as the output of a linear system governed by a difference equation
with an iid input. The structure of the linear system as reflected in the parameters of the
difference equation introduces the correlation we observe in the information sequence.
The information sequence {x_n} can be modeled in the form of the following difference equation:

    x_n = \sum_{i=1}^{N} a_i x_{n-i} + \sum_{j=1}^{M} b_j \epsilon_{n-j} + \epsilon_n        (87)

where {x_n} are samples of the process we wish to model and {ε_n} is a white noise sequence. We will assume throughout this topic that we are dealing with real-valued samples. Recall that a zero-mean wide-sense-stationary noise sequence {ε_n} is a sequence with an autocorrelation function

    R(k) = \begin{cases} \sigma^2 & \text{for } k = 0 \\ 0 & \text{otherwise} \end{cases}        (88)
In digital signal-processing terminology, Equation (87) represents the output of a linear time-invariant discrete-time filter with N poles and M zeros. In the statistical literature, this model is called an autoregressive moving average model of order (N, M), or an ARMA(N, M) model. The autoregressive label comes from the first summation in Equation (87), while the second summation gives the model the moving average portion of its name.
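To make Equation (87) concrete, here is a minimal Python sketch that generates a sequence directly from the difference equation. The coefficient values and the Gaussian choice for the white noise are illustrative assumptions for this sketch, not specified by the text.

```python
import numpy as np

def generate_arma(a, b, num_samples, sigma=1.0, seed=0):
    """Generate a sequence from the ARMA(N, M) difference equation (87):
    x_n = sum_i a_i x_{n-i} + sum_j b_j eps_{n-j} + eps_n."""
    rng = np.random.default_rng(seed)
    N, M = len(a), len(b)
    eps = rng.normal(0.0, sigma, num_samples)  # white noise sequence {eps_n}
    x = np.zeros(num_samples)
    for n in range(num_samples):
        ar = sum(a[i] * x[n - 1 - i] for i in range(min(N, n)))    # autoregressive part
        ma = sum(b[j] * eps[n - 1 - j] for j in range(min(M, n)))  # moving average part
        x[n] = ar + ma + eps[n]
    return x

# Illustrative ARMA(2, 1) model (coefficients chosen for this sketch only)
x = generate_arma(a=[1.1, -0.5], b=[0.5], num_samples=1000)
```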
If all of the b_j in Equation (87) were zero, only the autoregressive part of the ARMA model would remain:
    x_n = \sum_{i=1}^{N} a_i x_{n-i} + \epsilon_n        (89)
This model is called an Nth-order autoregressive model and is denoted by AR(N). In digital signal-processing terminology, this is an all-pole filter. The AR(N) model is the most popular of all the linear models, especially in speech compression, where it arises as a natural consequence of the speech production model. We will look at it a bit more closely.
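The all-pole interpretation can be checked numerically. The sketch below, assuming SciPy is available and using illustrative AR(2) coefficients, runs white noise through a filter whose transfer function has only poles and confirms that this reproduces the recursion of Equation (89):

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
eps = rng.normal(0.0, 1.0, 1000)   # white noise excitation {eps_n}

a1, a2 = 1.1, -0.5                 # illustrative AR(2) coefficients

# All-pole filter: numerator 1, denominator 1 - a1*z^{-1} - a2*z^{-2}
x = lfilter([1.0], [1.0, -a1, -a2], eps)

# The same sequence computed from the difference equation (89) directly
x_ref = np.zeros_like(eps)
for n in range(len(eps)):
    x_ref[n] = eps[n]
    if n >= 1:
        x_ref[n] += a1 * x_ref[n - 1]
    if n >= 2:
        x_ref[n] += a2 * x_ref[n - 2]

print(np.allclose(x, x_ref))       # True: the AR(N) model is an all-pole filter
```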
First notice that for the AR(N) process, knowing all of the past history of the process gives
no more information than knowing the last N samples of the process; that is,
    P(x_n \mid x_{n-1}, x_{n-2}, \ldots) = P(x_n \mid x_{n-1}, x_{n-2}, \ldots, x_{n-N})        (90)

which means that the AR(N) process is a Markov model of order N.
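Equation (90) can be seen empirically. A sketch, assuming a Gaussian AR(2) source with illustrative coefficients: fit least-squares linear predictors of increasing order and observe that the prediction error variance stops improving once the predictor order reaches N.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative AR(2) source: x_n = 1.1 x_{n-1} - 0.5 x_{n-2} + eps_n, Var(eps) = 1
num = 50_000
x = np.zeros(num)
eps = rng.normal(0.0, 1.0, num)
for n in range(2, num):
    x[n] = 1.1 * x[n - 1] - 0.5 * x[n - 2] + eps[n]

# Least-squares linear predictors of increasing order
for order in (1, 2, 3, 4):
    y = x[order:]                                  # targets x_n
    X = np.column_stack([x[order - k: num - k]     # regressors x_{n-k}
                         for k in range(1, order + 1)])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(order, np.var(y - X @ coeffs))
# The error variance drops until order 2, then levels off near Var(eps) = 1:
# samples beyond the last N = 2 carry no additional predictive information.
```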
The autocorrelation function of a process can tell us a lot about the sample-to-sample
behavior of a sequence. A slowly decaying autocorrelation function indicates a high sample-
to-sample correlation, while a fast-decaying autocorrelation denotes low sample-to-sample correlation.
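As an illustration (the AR(1) coefficients and the sample_autocorr helper below are our own assumptions for this sketch, not from the text), compare the sample autocorrelation of a strongly correlated AR(1) process with that of a weakly correlated one:

```python
import numpy as np

def sample_autocorr(x, max_lag):
    """Normalized sample autocorrelation R(k)/R(0) for lags 0..max_lag."""
    x = x - x.mean()
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(max_lag + 1)])
    return r / r[0]

rng = np.random.default_rng(2)
num = 20_000
eps = rng.normal(size=num)

for a in (0.95, 0.1):                  # strong vs. weak sample-to-sample correlation
    x = np.zeros(num)
    for n in range(1, num):
        x[n] = a * x[n - 1] + eps[n]   # AR(1): x_n = a x_{n-1} + eps_n
    print(f"a = {a}:", np.round(sample_autocorr(x, 5), 3))
# a = 0.95 decays slowly (high correlation); a = 0.1 drops to ~0 by lag 1.
```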