Digital Signal Processing Reference
nondeterministic methods must be used. As most signals have random com-
ponents, probability-based models form a powerful set of modeling methods.
Accordingly, signal processing methods have deep roots in statistical estima-
tion theory.
Consider a set of N observation samples. In most applications, the observation samples are captured by a moving window centered at some position n, where we consider the general case of a vector index to account for multidimensional signals. Such samples will be denoted as $\mathbf{x}[n] = [x_1[n], x_2[n], \ldots, x_N[n]]^T$. For notational convenience, we drop the index n unless necessary for clarity.

Assume now that we model these samples as independent and identically distributed (i.i.d.). Each observation sample is then characterized by the common probability density function (pdf) $f_\beta(x)$, where $\beta$ is the mean, or location, of the distribution. Often $\beta$ is information-carrying and unknown, and thus must be estimated. The ML estimate of the location is achieved by maximizing, with respect to $\beta$, the probability of observing $x_1, x_2, \ldots, x_N$. For i.i.d. samples, this results in
$$\hat{\beta} = \arg\max_{\beta} \prod_{i=1}^{N} f_\beta(x_i). \qquad (2.1)$$
Thus, the value of $\beta$ that maximizes the product of the pdfs constitutes the ML estimate.
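Equation (2.1) can be illustrated with a small numerical sketch. The data values and the Gaussian model pdf below are hypothetical; in practice one maximizes the log-likelihood (the sum of log-pdfs) rather than the raw product, to avoid numerical underflow:

```python
import math

def gaussian_pdf(x, beta, sigma=1.0):
    # Illustrative model pdf f_beta(x): Gaussian with location beta.
    return math.exp(-((x - beta) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def ml_location(samples, candidates):
    # Maximize the log-likelihood sum(log f_beta(x_i)) over a candidate grid;
    # equivalent to maximizing the product in (2.1) but numerically stable.
    def log_lik(beta):
        return sum(math.log(gaussian_pdf(x, beta)) for x in samples)
    return max(candidates, key=log_lik)

samples = [0.9, 1.1, 1.0, 1.3, 0.7]              # hypothetical observations
grid = [i / 100 for i in range(-300, 301)]        # candidate beta values
beta_hat = ml_location(samples, grid)
# For a Gaussian model the ML location estimate coincides with the sample mean.
```

Here the brute-force grid search stands in for whatever maximization method is appropriate; the point is only that (2.1) reduces ML location estimation to a one-dimensional optimization over $\beta$.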
The degree to which the ML estimate accurately represents the location
is, to a large extent, dependent on how accurately the model distribution
represents the true distribution of the observation process. To allow for a wide
range of sample distributions, the commonly assumed Gaussian distribution
can be generalized by allowing the exponential rate of tail decay to be a free
parameter. This results in the generalized Gaussian density function:
$$f_\beta(x) = c\, e^{-\left(|x-\beta|/\sigma\right)^p}, \qquad (2.2)$$

where $p$ governs the rate of tail decay, $c = p/(2\sigma\,\Gamma(1/p))$, and $\Gamma(\cdot)$ is the Gamma function. This includes the standard Gaussian distribution as a special case ($p = 2$). For $p < 2$, the tails decay slower than in the Gaussian case, resulting in a heavier-tailed distribution. Of particular interest is the case $p = 1$, which yields the double exponential, or Laplacian, distribution,
$$f_\beta(x) = \frac{1}{2\sigma}\, e^{-|x-\beta|/\sigma}. \qquad (2.3)$$
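A short sketch (with hypothetical parameter values) evaluates the generalized Gaussian density (2.2) directly and checks that $p = 1$ reproduces the Laplacian form (2.3):

```python
import math

def generalized_gaussian_pdf(x, beta, sigma, p):
    # f_beta(x) = c * exp(-(|x - beta| / sigma)**p),
    # with normalizing constant c = p / (2 * sigma * Gamma(1/p)).
    c = p / (2 * sigma * math.gamma(1 / p))
    return c * math.exp(-(abs(x - beta) / sigma) ** p)

# p = 1: Gamma(1) = 1, so c = 1/(2*sigma) and (2.2) reduces to the Laplacian (2.3).
laplacian = (1 / (2 * 1.5)) * math.exp(-abs(2.0 - 0.5) / 1.5)
assert abs(generalized_gaussian_pdf(2.0, 0.5, 1.5, p=1) - laplacian) < 1e-12
```

Varying `p` while holding `beta` and `sigma` fixed shows the tail behavior described above: smaller `p` concentrates less mass near the center and more in the tails.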
The ML criterion can be applied to optimally estimate the location of a set
of N samples distributed according to the generalized Gaussian distribution,
yielding
$$\hat{\beta} = \arg\max_{\beta} \prod_{i=1}^{N} c\, e^{-\left(|x_i-\beta|/\sigma\right)^p} = \arg\min_{\beta} \sum_{i=1}^{N} |x_i-\beta|^p. \qquad (2.4)$$
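Two special cases of (2.4) are worth noting: for $p = 2$ the cost is the squared error, whose minimizer is the sample mean, while for $p = 1$ it is the absolute error, minimized by the sample median. A brute-force sketch (hypothetical data, grid search standing in for a proper optimizer) illustrates both:

```python
def gg_cost(samples, beta, p):
    # The cost minimized in (2.4): sum_i |x_i - beta|**p.
    return sum(abs(x - beta) ** p for x in samples)

def ml_location_gg(samples, p, grid):
    # Brute-force minimization over a candidate grid (illustrative, not efficient).
    return min(grid, key=lambda b: gg_cost(samples, b, p))

samples = [1.0, 2.0, 3.0, 4.0, 10.0]             # note the outlier at 10
grid = [i / 100 for i in range(0, 1101)]
mean_like = ml_location_gg(samples, p=2, grid=grid)    # p = 2: the sample mean
median_like = ml_location_gg(samples, p=1, grid=grid)  # p = 1: the sample median
```

The outlier pulls the $p = 2$ estimate toward it, while the $p = 1$ (median) estimate is unaffected, which is precisely why heavier-tailed models lead to more robust location estimators.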