, and the wavelet representation can be written as

$$
s(t) \;=\; \sum_{i=0}^{N_J-1} u_{J,i}\,\phi_{J,i}(t) \;+\; \sum_{j=1}^{J} \sum_{i=0}^{N_j-1} w_{j,i}\,\psi_{j,i}(t), \qquad j \in \{1, \ldots, J\}, \qquad (10.1)
$$
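The structure of Equation (10.1) can be checked numerically. The sketch below implements a J-level analysis with the Haar filter pair (one concrete choice of φ, ψ; the helper name `haar_dwt` is mine, not from the text) and verifies the per-scale coefficient counts and the energy preservation of an orthogonal DWT.

```python
# Sketch: J-level Haar analysis, illustrating Equation (10.1)'s structure:
# scaling coefficients u_{J,i} at the coarsest scale J plus wavelet
# coefficients w_{j,i} at scales j = 1..J. Haar is one choice of (phi, psi).
import numpy as np

def haar_dwt(s, J):
    """Return (u_J, {j: w_j}) for a length-2^K signal, J levels deep."""
    u = np.asarray(s, dtype=float)
    w = {}
    for j in range(1, J + 1):            # j = 1 is the finest scale
        a, b = u[0::2], u[1::2]
        w[j] = (a - b) / np.sqrt(2.0)    # wavelet (detail) coefficients
        u = (a + b) / np.sqrt(2.0)       # scaling (approximation) coeffs
    return u, w

N, J = 1024, 3
s = np.sin(2 * np.pi * 5 * np.arange(N) / N)   # toy signal s(t)
u_J, w = haar_dwt(s, J)

print(len(u_J))                          # N / 2^J scaling coefficients
print({j: len(w[j]) for j in w})         # scale j holds N / 2^j coefficients

# Orthogonality: total energy is preserved across the transform.
energy = np.sum(u_J**2) + sum(np.sum(w[j]**2) for j in w)
print(np.allclose(energy, np.sum(s**2)))  # True
```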
where $J$ denotes the scale of analysis, and scale $J$ indicates the coarsest scale or lowest resolution of analysis. $N_j = N/2^j$ is the number of coefficients at scale $j$. $u_{J,i} = \int s(t)\,\phi_{J,i}(t)\,dt$ is the scaling coefficient, which measures the local mean around the time $2^J i$. $w_{j,i} = \int s(t)\,\psi_{j,i}(t)\,dt$ is the wavelet coefficient, which characterizes the local variation around the time $2^j i$ and the frequency $2^{-j} f_0$. Because of the multiscale binary-tree structure, given a wavelet coefficient $w_{j,i}$, its parent is $w_{j+1,\lfloor i/2 \rfloor}$, where the operation $\lfloor x \rfloor$ takes the integer part of $x$, and its two children are $w_{j-1,2i}$ and $w_{j-1,2i+1}$, as shown in Figure 10.3a. In the following, we use $w$ to denote the vector of all wavelet coefficients.
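The parent/child relations above are pure index arithmetic; a minimal sketch (the helper names are illustrative, not from the text):

```python
# Sketch of the binary-tree index arithmetic: the parent of w_{j,i} is
# w_{j+1, floor(i/2)}; its children are w_{j-1, 2i} and w_{j-1, 2i+1}.
# Helper names are mine, not from the text.
def parent(j, i):
    """Index (scale, position) of the parent of coefficient w_{j,i}."""
    return (j + 1, i // 2)            # floor(i/2): integer part of i/2

def children(j, i):
    """Indices (scale, position) of the two children of w_{j,i}."""
    return (j - 1, 2 * i), (j - 1, 2 * i + 1)

# The two relations are mutually consistent: each child's parent is w_{j,i}.
print(parent(3, 5))                   # (4, 2)
print(children(3, 5))                 # ((2, 10), (2, 11))
print([parent(*c) for c in children(3, 5)])  # [(3, 5), (3, 5)]
```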
For most real-world signals and images, the set of wavelet coefficients is sparse. This means that the majority of the coefficients are small and only a few coefficients contain most of the signal energy. Thus, the probability density function (pdf), $f_W(w)$, of the wavelet coefficients $w$ can be described by a density with a peak (centered at 0) and heavy non-Gaussian tails, where $W$ stands for the random variable of $w$. It was presented in Reference 11
that the Gaussian mixture model (GMM) can well approximate this non-
Gaussian density, as shown in Figure 10.3b. Therefore, we associate each
wavelet coefficient $w$ with a set of discrete hidden states $S = 0, 1, \ldots, M-1$, which have probability mass functions (pmf) $p_S(m)$. Given $S = m$, the pdf of the coefficient $w$ is Gaussian with mean $\mu_m$ and variance $\sigma_m^2$. We can parameterize an $M$-state GMM by

$$
\pi = \{\, p_S(m), \mu_m, \sigma_m^2 \;\big|\; m = 0, 1, \ldots, M-1 \,\},
$$

and the overall pdf of $w$ is determined by

$$
f_W(w) \;=\; \sum_{m=0}^{M-1} p_S(m)\, f_{W|S}(w \,|\, S = m), \qquad (10.2)
$$
where

$$
f_{W|S}(w \,|\, S = m) \;=\; g\!\left(w;\, \mu_m, \sigma_m^2\right) \;=\; \frac{1}{\sqrt{2\pi\sigma_m^2}}\, \exp\!\left(-\frac{(w-\mu_m)^2}{2\sigma_m^2}\right). \qquad (10.3)
$$
Although $w$ is conditionally Gaussian given its state $S = m$, it is not Gaussian in general due to the randomness of the state variable $S$.
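Equations (10.2) and (10.3) can be evaluated directly. The sketch below builds a two-state ($M = 2$) mixture with illustrative zero-mean parameters of my own choosing (a high-probability low-variance state for small coefficients, a rare high-variance state for large ones) and confirms that the marginal has positive excess kurtosis, i.e., is non-Gaussian despite being conditionally Gaussian.

```python
# Sketch: evaluate the M-state GMM of Equations (10.2)-(10.3).
# Parameter values are illustrative, not from the text.
import numpy as np

p_S   = np.array([0.8, 0.2])      # pmf p_S(m), m = 0, 1
mu    = np.array([0.0, 0.0])      # means mu_m
sigma = np.array([0.5, 3.0])      # standard deviations sigma_m

def g(w, mu_m, sigma_m):
    """Gaussian pdf f_{W|S}(w | S = m), Equation (10.3)."""
    return np.exp(-(w - mu_m)**2 / (2 * sigma_m**2)) / np.sqrt(2 * np.pi * sigma_m**2)

def f_W(w):
    """Marginal pdf of w, Equation (10.2): weighted sum over hidden states."""
    return sum(p_S[m] * g(w, mu[m], sigma[m]) for m in range(len(p_S)))

# For a zero-mean scale mixture: Var[w] = sum_m p_S(m) sigma_m^2 and
# E[w^4] = sum_m p_S(m) * 3 sigma_m^4, so excess kurtosis E[w^4]/Var^2 - 3 > 0
# whenever the sigma_m differ: heavier tails than a single Gaussian.
var    = np.sum(p_S * sigma**2)
fourth = np.sum(p_S * 3 * sigma**4)
print(fourth / var**2 - 3 > 0)    # True: peaky, heavy-tailed marginal
```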
Although the orthogonal DWT can largely decorrelate an image, leaving almost uncorrelated wavelet coefficients, it is widely understood that considerable high-order dependencies remain in $w$. This can be observed from the characteristics of the wavelet coefficient distribution, such as intrascale clustering and interscale persistence, as shown in Figure 10.1. Therefore, in Reference 1, a tree-structured hidden Markov tree (HMT) model was developed by connecting state variables of wavelet coefficients vertically across the