Table 2. The relation of performance with the number of reference patterns.

No. of clusters per class |   10   |   20   |    30   |    40   |    50   |   180
Time (msec)               | 51.123 | 96.730 | 140.985 | 184.173 | 226.155 | 657.220
K-Means clustering stops when no cluster center moves anymore. In our
experiment, however, about 10% of the clustering cases did not converge to a
stable state. Such cases require additional criteria for stopping the clustering
process, and we used two. First, we limited the number of iterations to 100: if a
clustering process has not stopped after 100 iterations, it is forced to stop and
the final cluster centers are taken as the result. Second, we put a threshold on
the minimum distance of center movement: a cluster center is flagged as moved
only when the distance of its movement is greater than the threshold.
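The two stopping criteria can be sketched as follows (an illustrative 1-D
K-Means, not our actual implementation; the threshold value is a placeholder):

```python
import random

def kmeans(points, k, max_iter=100, move_threshold=1e-4):
    """K-Means with the two stopping criteria described above:
    an iteration cap and a minimum center-movement threshold.
    Illustrative sketch on 1-D points; threshold is a placeholder."""
    centers = random.sample(points, k)
    for _ in range(max_iter):  # criterion 1: at most 100 iterations
        # assign each point to its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[idx].append(p)
        moved = False
        for i, cluster in enumerate(clusters):
            if not cluster:
                continue  # keep an empty cluster's center in place
            new_center = sum(cluster) / len(cluster)
            # criterion 2: a center counts as "moved" only when it
            # shifts farther than the threshold
            if abs(new_center - centers[i]) > move_threshold:
                moved = True
            centers[i] = new_center
        if not moved:  # converged: no center moved beyond the threshold
            break
    return centers
```

For example, on points drawn from two well-separated groups, the loop
typically exits via the movement-threshold criterion well before the iteration
cap is reached.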
4 Related Work
Leong et al. experimented with a very small number of reference patterns to
see how DTW performs on accelerometer-based gesture patterns, and showed
that DTW performed very well, with over 90% recognition accuracy [3].
However, the experiment was limited to a small set of only 10 very
discriminative, simple gestures under user-dependent conditions, and no
systematic way of choosing good reference patterns was given.
Liu et al. proposed an adaptation algorithm to limit the number of reference
patterns [4]. Their method initially assigns only two reference patterns per
gesture class. On receiving a new training sample, an evaluation is performed
on the previously selected reference patterns as well as on the new training
sample, and one of them is discarded.
Bong et al. utilized K-Means clustering to partition the reference patterns of
each gesture class into a small number of clusters and used the centroid of each
cluster as a reference pattern, which is very similar to our method. One major
difference is that Bong et al.'s K-Means clustering estimates the centroid of a
cluster after resizing the reference patterns to their average length, so that the
patterns become of uniform length.
Chen et al. proposed a method that utilizes DTW for fuzzy clustering. Their
use of DTW is limited to calculating the pairwise distances of input patterns
to build a distance matrix, which serves as the input to a fuzzy clustering
algorithm; DTW is not used at runtime for classifying test patterns. Their
specific algorithm runs with complexity O(2n − 1) [1]. Our method is different
in that we use DTW both as a distance measure and as a tool for deriving
cluster centers. Also, at runtime, a simple DTW-based 1-NN algorithm is
enough for classifying test patterns.
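As a sketch of that runtime step, a plain dynamic-programming DTW distance
combined with 1-NN classification over labeled reference patterns might look
like the following (simplified to 1-D sequences; the labels and sequences below
are made up for illustration, not taken from our experiments):

```python
def dtw(a, b):
    """Classic O(len(a) * len(b)) dynamic-programming DTW distance
    between two 1-D sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping steps
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def classify_1nn(test, references):
    """1-NN over (label, pattern) reference pairs using DTW distance."""
    return min(references, key=lambda r: dtw(test, r[1]))[0]
```

Because DTW warps the time axis, the test pattern need not have the same
length as the reference patterns it is compared against.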
Oates et al. introduced time-series clustering methods based on DTW and
HMMs. They utilized DTW not only as a distance measure but also as a
method for finding cluster centers. To find the center of a cluster, their
algorithm first selects the pattern that minimizes the total distance to all
other patterns in the cluster.
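That first step, selecting the pattern with minimum total distance to the rest
(a medoid), can be sketched generically; the distance function is pluggable
(e.g. a DTW distance), and the toy example below uses absolute difference on
scalars only for brevity:

```python
def medoid(patterns, dist):
    """Return the pattern minimizing the sum of distances to all
    other patterns in the cluster (a medoid). Any pairwise distance,
    such as DTW, can be plugged in via `dist`."""
    return min(patterns, key=lambda p: sum(dist(p, q) for q in patterns))
```

Unlike a centroid, a medoid is always one of the input patterns, so it avoids
the length-averaging problem that arises when averaging variable-length
sequences.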