Table 1. The comparison results with averaged recognition rate and standard deviation. Raw data represents using the original representation without preprocessing. PCA+SVM represents applying PCA as preprocessing, where the number is the projected dimensionality. BoW+SVM represents applying the BoW model with an SVM, where d is the codebook size and w is the window size. tBoW+SVM is our proposed method.
Method                               Recognition rate (std. dev.)
Raw data                             0.7410 (0.0152)
PCA(100)+SVM                         0.7308 (0.0146)
BoW+SVM (w:20, d:32)                 0.8277 (0.0148)
BoW+SVM (w:40, d:64)                 0.8538 (0.0104)
tBoW+SVM (w:20, d:32, 4 segments)    0.8819 (0.0107)
symmetric opening/closing trajectory. For the sake of simplicity, we only use the data from the Y-axis in our experiments. Fig. 5 shows some examples from our door opening/closing trajectory dataset.
4.2 Experimental Results
In our experiments, we use a linear support vector machine (SVM) as our classification model. To determine the appropriate parameter C for the SVM, we perform a grid search over C = {2^0, 2^1, 2^2, ..., 2^9} with cross-validation. For the evaluation, we compare our proposed method with different representations. One uses the original representation without preprocessing. The other applies Principal Component Analysis (PCA) as preprocessing; note that we project our data onto the first 50 and 100 principal components in our experiments. For our methods, we use the k-means clustering algorithm to extract the codebook. For the sizes of the sliding window and the codebook, we use two pairs in our experiments: (w, d) = (20, 32) and (40, 64). For each experiment, we randomly split 80% of the data for training and the remaining 20% for testing. We repeat this process five times and report the average recognition rates and standard deviations.
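As a concrete illustration, the following is a minimal sketch of this evaluation pipeline in Python with scikit-learn, assuming each trajectory is a fixed-length 1-D array of Y-axis values. Helper names such as sliding_windows and extract_bow_features, and the placeholder random data, are illustrative and not part of the paper.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC
from sklearn.model_selection import GridSearchCV, train_test_split

def sliding_windows(series, w):
    # Cut a 1-D series into overlapping windows of length w (stride 1).
    return np.array([series[i:i + w] for i in range(len(series) - w + 1)])

def extract_bow_features(trajectories, w=20, d=32):
    # Build a codebook of d local patterns with k-means and represent each
    # trajectory as a normalized histogram of codeword counts.
    all_windows = np.vstack([sliding_windows(t, w) for t in trajectories])
    codebook = KMeans(n_clusters=d, n_init=10, random_state=0).fit(all_windows)
    feats = []
    for t in trajectories:
        words = codebook.predict(sliding_windows(t, w))
        hist = np.bincount(words, minlength=d).astype(float)
        feats.append(hist / hist.sum())
    return np.array(feats), codebook

# Placeholder random data; replace with the door-trajectory dataset.
rng = np.random.default_rng(0)
trajectories = [rng.standard_normal(200) for _ in range(100)]
labels = rng.integers(0, 2, size=100)

X, codebook = extract_bow_features(trajectories, w=20, d=32)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)

# Grid search over C in {2^0, ..., 2^9} with cross-validation, as in the text.
grid = GridSearchCV(LinearSVC(), {"C": [2.0 ** k for k in range(10)]}, cv=5)
grid.fit(X_tr, y_tr)
print("best C:", grid.best_params_["C"], "test accuracy:", grid.score(X_te, y_te))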
Table 1 shows the experimental results for the five different settings, including ours. As observed, our method achieves a higher recognition rate than the other methods. Comparing PCA with the raw data, we can see that PCA performs worse; this suggests that some local information is lost when the data are projected onto the principal subspace. Comparing BoW with the raw data, the improvement is significant. These results confirm that representing a time series by the distribution of its local patterns provides more discriminative ability. While the standard BoW improves the performance, our temporal BoW (tBoW) model achieves the best performance. Comparing the results shown in Table 1, taking the temporal information into account improves the recognition performance. It is also worth noting that larger codebook and window sizes perform better when BoW is applied to the full sequence. This shows that 2 seconds (40 timestamps) may be more suitable for a longer trajectory, whereas one second (20 timestamps) might be too short to capture the local patterns. However, when the BoW model is applied to an interval that contains only n timestamps, one second (20 timestamps) is more appropriate for the shorter trajectory.
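Under the same assumptions, here is a minimal sketch of the temporal BoW idea: the trajectory is split into a fixed number of segments (4 in Table 1), a codeword histogram is computed per segment, and the histograms are concatenated so that where a local pattern occurs within the trajectory is retained. The codebook argument is assumed to be a fitted k-means model such as the one built in the previous sketch; the function name is illustrative.

import numpy as np

def tbow_features(trajectory, codebook, w=20, d=32, n_segments=4):
    # Split the trajectory into equal segments (each must contain at least w samples).
    segments = np.array_split(trajectory, n_segments)
    hists = []
    for seg in segments:
        # Sliding windows of length w within this segment only.
        windows = np.array([seg[i:i + w] for i in range(len(seg) - w + 1)])
        # Assign each window to its nearest codeword and build a histogram.
        words = codebook.predict(windows)
        hist = np.bincount(words, minlength=d).astype(float)
        hists.append(hist / max(hist.sum(), 1.0))
    # Concatenated per-segment histograms of length d * n_segments.
    return np.concatenate(hists)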