Algorithm 6 hard-moVMF
Require: Set X of data points on S^{d−1}
Ensure: A disjoint k-partitioning of X
Initialize all α_h, μ_h, κ_h, h = 1, ..., k
repeat
    {The Hardened E (Expectation) step of EM}
    for i = 1 to n do
        for h = 1 to k do
            f_h(x_i | θ_h) ← c_d(κ_h) e^{κ_h μ_h^T x_i}
        q(h | x_i, Θ) ← 1 if h = argmax_{h'} α_{h'} f_{h'}(x_i | θ_{h'}), and 0 otherwise
    {The M (Maximization) step of EM}
    for h = 1 to k do
        α_h ← (1/n) Σ_{i=1}^n q(h | x_i, Θ)
        μ_h ← Σ_{i=1}^n x_i q(h | x_i, Θ)
        r̄ ← ‖μ_h‖ / (n α_h)
        μ_h ← μ_h / ‖μ_h‖
        κ_h ← (r̄d − r̄³) / (1 − r̄²)
until convergence.
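As a concrete illustration, the hardened EM loop of hard-moVMF can be sketched in NumPy. This is a minimal sketch under stated assumptions: the function names, the optional init_mu argument, and the stopping test (assignments no longer change) are our own choices, not part of the text; the vMF normalizer c_d(κ) is evaluated in log space via SciPy's exponentially scaled Bessel function for numerical stability.

```python
# Minimal NumPy sketch of hard-moVMF (names and init scheme are illustrative).
import numpy as np
from scipy.special import ive  # exponentially scaled modified Bessel I_v

def log_cd(d, kappa):
    """log c_d(kappa), the vMF normalizer, computed stably in log space."""
    v = d / 2.0 - 1.0
    # log I_v(kappa) = log ive(v, kappa) + kappa
    return (v * np.log(kappa) - (d / 2.0) * np.log(2.0 * np.pi)
            - (np.log(ive(v, kappa)) + kappa))

def hard_movmf(X, k, n_iter=100, init_mu=None, seed=0):
    """Hardened EM for a mixture of vMFs; X has unit-norm rows."""
    n, d = X.shape
    rng = np.random.default_rng(seed)
    alpha = np.full(k, 1.0 / k)
    kappa = np.ones(k)
    mu = X[rng.choice(n, k, replace=False)] if init_mu is None else init_mu.copy()
    labels = np.full(n, -1)
    for it in range(n_iter):
        # Hardened E-step: q(h | x_i) puts all mass on the best component.
        logf = log_cd(d, kappa)[None, :] + (X @ mu.T) * kappa[None, :]
        new_labels = np.argmax(np.log(alpha)[None, :] + logf, axis=1)
        if np.array_equal(new_labels, labels):
            break  # assignments stable -> converged
        labels = new_labels
        # M-step: update alpha_h, mu_h, kappa_h as in the algorithm.
        for h in range(k):
            q = (labels == h)
            if not q.any():
                continue  # leave an emptied component untouched
            alpha[h] = q.sum() / n
            s = X[q].sum(axis=0)                  # sum_i x_i q(h | x_i, Theta)
            rbar = np.linalg.norm(s) / (n * alpha[h])
            rbar = min(rbar, 1.0 - 1e-8)          # guard degenerate clusters
            mu[h] = s / np.linalg.norm(s)
            kappa[h] = (rbar * d - rbar**3) / (1.0 - rbar**2)
    return labels, alpha, mu, kappa
```

For well-separated directional clusters the hard assignments typically stabilize within a few iterations, at which point the loop exits.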
In addition to the above-mentioned algorithms, we report experimental results on another algorithm, fskmeans (6), that belongs to the same class in the sense that, like spkmeans, it can be derived from the mixture of vMF models with some restrictive assumptions. In fskmeans, the centroids of the mixture components are estimated as in hard-moVMF. The κ value for a component is explicitly set to be inversely proportional to the number of points in the cluster corresponding to that component. This explicit choice simulates frequency-sensitive competitive learning, which implicitly prevents the formation of null clusters, a well-known problem in regular kmeans (14).
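To make this concentration rule concrete, the fragment below sketches an update in which κ_h is set inversely proportional to the cluster size n_h. The function name and the proportionality constant c are illustrative choices, not from the text.

```python
import numpy as np

def fs_kappa(labels, k, c=1.0):
    """Return kappa_h = c / n_h for each component h (c is a free constant)."""
    counts = np.bincount(labels, minlength=k).astype(float)
    counts[counts == 0] = 1.0  # guard: an empty cluster gets the largest kappa
    return c / counts
```

Because an under-populated cluster receives a large κ (a sharply peaked component), it competes strongly for nearby points, which is how the scheme discourages null clusters.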
6.7 Experimental Results
We now offer some experimental validation to assess the quality of clustering results achieved by our algorithms. We compare the following four algorithms on several datasets.
1. Spherical KMeans (20)—spkmeans.
2. Frequency Sensitive Spherical KMeans (6)—fskmeans.
3. moVMF-based clustering using hard assignments—hard-moVMF.