Digital Signal Processing Reference
In-Depth Information
scale, as shown in Figure 10.3c, where we can see that the HMT is able to capture the underlying interscale dependencies between parent and child state variables, which the second-order statistics cannot provide. In the HMT, each coefficient $W_{j,i}$ is conditionally independent of all other random variables given its state $S_{j,i}$. Thus, an $M$-state HMT is parameterized by

• $p_{S_J}(m)$: The pmf of the root node $S_J$, with $m = 0, 1, \dots, M-1$,
• $a^{m,n}_{j,j+1} = p_{S_j \mid S_{j+1}}(m \mid S_{j+1,\,i/2} = n)$: The transition probability that $S_{j,i}$ is in state $m$ given that $S_{j+1,\,i/2}$ is in state $n$, $j = 1, \dots, J-1$ and $m, n = 0, 1, \dots, M-1$,
• $\mu_{j,m}$ and $\gamma^2_{j,m}$: The mean and variance, respectively, of $W_{j,i}$ given that $S_{j,i}$ is in state $m$, $j = 1, \dots, J$ and $m = 0, 1, \dots, M-1$.
These parameters can be grouped into a model parameter vector $\theta$ as

$$\theta = \left\{ p_{S_J}(m),\; a^{m,n}_{j,j+1},\; \mu_{j,m},\; \gamma^2_{j,m} \;\middle|\; j = 1, \dots, J;\; n, m = 0, 1, \dots, M-1 \right\} \qquad (10.4)$$
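As a concrete illustration of how the parameter vector $\theta$ of Equation 10.4 might be organized in software, the sketch below stores the root pmf, the parent-to-child transition probabilities, and the per-scale conditional means and variances in NumPy arrays. The class name `HMTParams`, the array layout, and the validity check are our own choices for this sketch, not part of the text:

```python
import numpy as np

class HMTParams:
    """Container for the M-state HMT parameter vector theta of Equation 10.4.

    root_pmf[m]           : p_{S_J}(m), pmf of the root state S_J
    trans[j-1, m, n]      : probability that S_{j,i} is in state m given that
                            its parent S_{j+1,i/2} is in state n (j = 1,...,J-1)
    mean[j-1, m]          : mean of W_{j,i} given S_{j,i} = m (j = 1,...,J)
    var[j-1, m]           : variance of W_{j,i} given S_{j,i} = m
    """

    def __init__(self, J, M):
        self.J, self.M = J, M
        self.root_pmf = np.full(M, 1.0 / M)           # uniform initial pmf
        self.trans = np.full((J - 1, M, M), 1.0 / M)  # each column sums to 1 over m
        self.mean = np.zeros((J, M))
        self.var = np.ones((J, M))

    def is_valid(self, tol=1e-8):
        """Check that all pmfs sum to one and all variances are positive."""
        ok = abs(self.root_pmf.sum() - 1.0) < tol
        ok &= np.allclose(self.trans.sum(axis=1), 1.0, atol=tol)
        ok &= bool((self.var > 0).all())
        return bool(ok)
```

The uniform/zero-mean/unit-variance defaults here are merely neutral starting values; in practice they would be replaced by the initial estimate $\theta^0$ of the EM training described next.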
The accurate estimation of HMT model parameters is essential to its practical applications, which can be effectively approached by the iterative expectation-maximization (EM) algorithm [12]. This algorithm is known to numerically approximate maximum likelihood estimates for mixture-density problems.
The EM algorithm has a basic structure and the implementation steps are
problem dependent. The EM algorithm for HMT model training is presented
briefly here, and we refer the reader to Reference 1 for more details. In the case
of the HMT model training using the EM algorithm, we try to fit an $M$-state HMT model $\theta$ defined in Equation 10.4 to the observed $J$-scale tree-structured DWT, i.e., $w$. The iterative structure is shown as follows:
• Step 1. Initialization: Set an initial model estimate $\theta^0$ and the iteration counter $l = 0$.

• Step 2. E step: Calculate $p(S \mid w, \theta^l)$, which is the joint pmf for the hidden state variables and is used in the maximization of $E_S[\ln f(w, S \mid \theta) \mid w, \theta^l]$.

• Step 3. M step: Set $\theta^{l+1} = \arg\max_{\theta} E_S[\ln f(w, S \mid \theta) \mid w, \theta^l]$.

• Step 4. Iteration: Set $l = l + 1$. If it converges, then stop; otherwise, return to Step 2.
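The four steps above follow the generic EM pattern. As a minimal runnable illustration, the sketch below applies the same E/M alternation to the simplest mixture-density problem, a scalar $M$-state Gaussian mixture, rather than to the full HMT (whose E step requires the upward-downward tree recursions detailed in Reference 1). The function name, the quantile-based initialization, and the log-likelihood convergence test are our own assumptions:

```python
import numpy as np

def em_gaussian_mixture(w, M=2, n_iter=50, tol=1e-6):
    """EM for an M-state scalar Gaussian mixture, a toy stand-in for HMT
    training: theta = (pi_m, mu_m, var_m) for m = 0, ..., M-1."""
    # Step 1. Initialization: set theta^0 (quantile-spread means), l = 0.
    pi = np.full(M, 1.0 / M)
    mu = np.quantile(w, np.linspace(0.1, 0.9, M))
    var = np.full(M, float(np.var(w)) + 1e-6)
    prev_ll = -np.inf
    for _ in range(n_iter):
        # Step 2. E step: posterior pmf p(S = m | w, theta^l) per sample.
        logp = (np.log(pi) - 0.5 * np.log(2.0 * np.pi * var)
                - 0.5 * (w[:, None] - mu) ** 2 / var)
        ll = np.logaddexp.reduce(logp, axis=1)   # per-sample log f(w_i | theta^l)
        post = np.exp(logp - ll[:, None])        # rows sum to one
        # Step 3. M step: theta^{l+1} maximizes E_S[ln f(w, S | theta) | w, theta^l].
        nk = post.sum(axis=0)
        pi = nk / w.size
        mu = post.T @ w / nk
        var = (post * (w[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-12
        # Step 4. Iteration: stop once the log-likelihood stops improving.
        if ll.sum() - prev_ll < tol:
            break
        prev_ll = ll.sum()
    return pi, mu, var
```

In the HMT itself, Step 2 would instead compute the joint state pmf over each wavelet tree, and Step 3 would update $p_{S_J}(m)$, the transition probabilities, and the per-scale means and variances; the alternation and convergence logic are unchanged.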
The wavelet-domain HMMs have been applied to signal estimation, detection, and synthesis [1, 13]. Specifically, an “empirical” Bayesian approach was
developed to denoise a signal corrupted by additive white Gaussian noise
(AWGN). It was demonstrated that signal denoising using wavelet-domain
HMT outperformed other traditional wavelet-based signal denoising meth-
ods with well-preserved detailed structures. Given a noisy signal of AWGN
power $\sigma^2$, the HMT model $\theta$ is first obtained via EM training, during which we can also estimate the posterior hidden-state probabilities $p(S_{j,i} \mid w, \theta)$ for each