\[
\mathcal{N}(z;\mu_k,\Sigma_k)=\frac{1}{(2\pi)^{d/2}\,|\Sigma_k|^{1/2}}\,\exp\!\left(-\frac{1}{2}(z-\mu_k)^{T}\Sigma_k^{-1}(z-\mu_k)\right). \tag{1}
\]
We restrict the covariance matrices Σ_k to be diagonal [8], which proves to be effective and computationally economical.
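As a concrete illustration, the following sketch (hypothetical helper name, not from the original text) evaluates the log of the density in Equation 1 under the diagonal-covariance restriction, where d is the feature dimensionality and both the determinant and the quadratic form reduce to per-dimension operations.

```python
import numpy as np

def diag_gaussian_logpdf(z, mu, var):
    """Log of the density in Equation 1 when Sigma_k is diagonal.

    z, mu, var : 1-D arrays of length d (var holds the diagonal of Sigma_k).
    """
    d = z.shape[0]
    # log |Sigma_k| is the sum of log variances for a diagonal covariance
    log_det = np.sum(np.log(var))
    # (z - mu)^T Sigma_k^{-1} (z - mu) becomes a per-dimension sum
    quad = np.sum((z - mu) ** 2 / var)
    return -0.5 * (d * np.log(2.0 * np.pi) + log_det + quad)
```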
Second, an image-specific GMM is adapted from the global GMM, using the feature vectors in the particular image. This is preferred to directly estimating a separate GMM for each image, for the following reasons. 1) It makes parameter estimation of the image-specific GMM more robust, given the comparatively small number of feature vectors in a single image. 2) The global GMM learnt from all training images may provide useful information for the image-specific GMM. 3) As mentioned earlier, it establishes correspondence between Gaussian components in different image-specific GMMs. For robust estimation, we only adapt the mean vectors of the global GMM and retain the mixture weights and covariance matrices. In particular, we adapt an image-specific GMM by Maximum a Posteriori (MAP) estimation with the weighting placed entirely on the adaptation data. The posterior probabilities and the updated means are estimated as
\[
\Pr(k \mid z_j)=\frac{w_k\,\mathcal{N}(z_j;\mu_k^{\text{global}},\Sigma_k)}{\sum_{k'=1}^{K} w_{k'}\,\mathcal{N}(z_j;\mu_{k'}^{\text{global}},\Sigma_{k'})}, \tag{2}
\]
\[
\mu_k=\frac{1}{n_k}\sum_{j=1}^{H}\Pr(k \mid z_j)\,z_j, \tag{3}
\]
where n_k is a normalizing term,
\[
n_k=\sum_{j=1}^{H}\Pr(k \mid z_j), \tag{4}
\]
and Z = {z_1, ..., z_H} are the feature vectors extracted from the particular image.
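A minimal sketch of this adaptation step is given below, assuming the global GMM is available as arrays of weights, means, and diagonal variances (all names are illustrative, not from the original text). It computes the posteriors of Equation 2 for every feature vector in the image and then the adapted means of Equations 3 and 4.

```python
import numpy as np

def adapt_means(Z, weights, means_global, variances):
    """MAP-style mean adaptation with all weight on the adaptation data.

    Z            : (H, d) feature vectors of one image
    weights      : (K,)   global mixture weights w_k
    means_global : (K, d) global means mu_k^global
    variances    : (K, d) diagonal covariances of each component
    Returns the adapted means mu_k (K, d), per Equations 2-4.
    """
    H, d = Z.shape
    # Log-density of every vector under every component (Equation 1, diagonal case)
    diff = Z[:, None, :] - means_global[None, :, :]                  # (H, K, d)
    log_pdf = -0.5 * (d * np.log(2.0 * np.pi)
                      + np.sum(np.log(variances), axis=1)[None, :]
                      + np.sum(diff ** 2 / variances[None, :, :], axis=2))
    # Posterior Pr(k | z_j), Equation 2, computed in the log domain for stability
    log_num = np.log(weights)[None, :] + log_pdf                     # (H, K)
    log_post = log_num - np.logaddexp.reduce(log_num, axis=1, keepdims=True)
    post = np.exp(log_post)
    # Normalizing term n_k, Equation 4, and adapted means, Equation 3
    n_k = post.sum(axis=0)                                           # (K,)
    mu_k = (post.T @ Z) / np.maximum(n_k, 1e-12)[:, None]            # (K, d)
    return mu_k
```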
As shown in Equation 2, the image-specific GMMs leverage the statistical membership of each feature vector among multiple Gaussian components. This sets the Gaussianized vector representation apart from the histogram-of-keywords representation, which originally requires hard membership of each feature vector in a single keyword. In addition, Equation 3 shows that the Gaussianized vector representation encodes additional information about the feature vectors statistically assigned to each Gaussian component, via the means of the components.
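To make the contrast concrete, the snippet below (illustrative values and names only) shows how the posteriors from Equation 2 would be collapsed into the hard assignments used by a histogram-of-keywords representation, discarding the soft membership that the Gaussianized representation retains.

```python
import numpy as np

# post: (H, K) posteriors Pr(k | z_j) from Equation 2 (toy values)
post = np.array([[0.7, 0.2, 0.1],
                 [0.4, 0.5, 0.1]])

# Hard membership: each feature vector counts toward exactly one component/keyword
hard_hist = np.bincount(np.argmax(post, axis=1), minlength=post.shape[1])

# Soft membership: each feature vector contributes fractionally to every component
soft_counts = post.sum(axis=0)
```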
Given the computational cost concerns in many applications, another advantage of using a GMM to model the feature vector distribution is that efficient approximations exist that do not significantly degrade its effectiveness. For example, we can prune Gaussian components with very low weights in the adapted image-specific GMMs. Another possibility is to eliminate the terms in Equation 3 that involve very low posterior probabilities in Equation 2. Neither of these approaches significantly degrades the GMM's capability to approximate a distribution [8].
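One possible realization of these approximations is sketched below (function names and thresholds are illustrative, not from the original text): components with negligible weights are dropped, and posteriors below a threshold are zeroed so that the corresponding terms vanish from the sums in Equations 3 and 4.

```python
import numpy as np

def prune_components(weights, means, variances, min_weight=1e-3):
    """Drop Gaussian components whose mixture weight is below min_weight."""
    keep = weights >= min_weight
    w = weights[keep]
    return w / w.sum(), means[keep], variances[keep]

def sparsify_posteriors(post, min_post=1e-4):
    """Zero out posteriors below min_post so the sums in Equations 3 and 4 skip them."""
    return np.where(post >= min_post, post, 0.0)
```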