Digital Signal Processing Reference
FIGURE 10.14
The simplified characterization of HMT-3S. (Each quadtree node groups the HL, LH, and HH subband coefficients at the same scale and location.)
HMM, HMT-3S, by grouping the three DWT subbands into one quadtree
structure. This grouping strategy is popular in the graphical modeling and
Bayesian network literature,17 and, as discussed before, it was used to develop
an improved HMT, HMT-2, by grouping every two neighboring coefficients
into one node. It was shown that the more complete statistical characteriza-
tion of DWT from HMT-2 improves the signal denoising performance, as well
as the model training efficiency. We show the simplified characterization of
HMT-3S in Figure 10.14, where we see that HMT-3S has the same quadtree
structure as HMT, except for the number of coefficients in a node, i.e., the state
number. If we assume the hidden state number to be two for each wavelet
coefficient, there are eight states in a node of HMT-3S.* It is worth noting that
two-state GMMs are still used to characterize the DWT marginal statistics.
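The eight-state count follows from taking the Cartesian product of the three per-coefficient binary states, one per subband. A minimal sketch (variable names are illustrative, not from the text):

```python
from itertools import product

# Each HMT-3S node groups one coefficient from each of the three DWT
# subbands (HL, LH, HH); with two hidden states per coefficient, the
# node state is the triple of per-coefficient states: 2**3 = 8 in all.
subbands = ("HL", "LH", "HH")
node_states = list(product((0, 1), repeat=len(subbands)))
```

Enumerating the joint states this way is also how the node-level transition matrix is indexed: each of its 8 x 8 entries links one parent triple to one child triple.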
Thus HMT-3S is parameterized by
Θ_HMT-3S = { p_J(u), ε_{j,j−1}^{u,v}, σ_{B,j,b} | B ∈ {HL, LH, HH}; j = 1, ..., J; u, v = 0, ..., 7; b = 0, 1 }.
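As a back-of-the-envelope illustration of the model size, the parameters can be tallied in a few lines. This sketch assumes one 8 x 8 transition matrix per parent-child scale pair (J − 1 of them), zero-mean GMM components so that only variances are counted, and an illustrative J = 4; it ignores sum-to-one constraints on the probabilities:

```python
# Rough parameter tally for HMT-3S (illustrative assumptions, see lead-in).
J = 4            # number of DWT scales (example value, not from the text)
n_states = 8     # 2**3 joint hidden states per node
n_subbands = 3   # HL, LH, HH
n_mix = 2        # two-state GMM per wavelet coefficient

root_probs = n_states                    # p_J(u) at the coarsest scale
transitions = (J - 1) * n_states ** 2    # eps_{j,j-1}^{u,v}, one matrix per scale pair
variances = n_subbands * J * n_mix       # sigma_{B,j,b}
total = root_probs + transitions + variances
print(total)
```

The transition matrices dominate the count, which is why grouping coefficients into nodes trades a larger state space for a more complete statistical characterization.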
The EM training algorithm 1 can be straightforwardly extended to the eight-
state HMT-3S. Similar to HMT, the HMT-3S model likelihood can be computed
* Because the three coefficients in a node follow three GMMs, respectively, a three-dimensional Gaussian pdf is involved in HMT-3S. For simplicity, we assume the cross-covariances of the multivariate Gaussian pdf are zero here.
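With zero cross-covariances, the three-dimensional node pdf factors into a product of three one-dimensional Gaussians, so no matrix inversion is needed. A small sketch (function names are illustrative):

```python
import math

def gauss1d(x, mu, var):
    """Univariate Gaussian pdf N(x; mu, var)."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def node_pdf(x, mu, var):
    """3-D Gaussian pdf with a diagonal covariance matrix: it factors
    into the product of the three marginal 1-D pdfs."""
    p = 1.0
    for xi, mi, vi in zip(x, mu, var):
        p *= gauss1d(xi, mi, vi)
    return p

# Standard 3-D normal evaluated at the origin: (2*pi)**(-3/2)
val = node_pdf((0.0, 0.0, 0.0), (0.0, 0.0, 0.0), (1.0, 1.0, 1.0))
```

The determinant of a diagonal covariance is the product of the variances and the quadratic form reduces to a sum, which is exactly what the per-dimension product computes.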