Table 7.3 Time and Space Complexity for SEA Dataset

Complexity                      Normal    SMOTE     UB        SERA      REA
Training time complexity (s)    2.632     4.092     7.668     3.952     5.356
Testing time complexity (s)     0.088     0.148     0.236     0.156     1.904
Space complexity (KB)           1142      1185      1362      1266      1633
Table 7.4 Time and Space Complexity for ELEC Dataset

Complexity                      Normal    SMOTE     UB        SERA      REA
Training time complexity (s)    1.376     1.712     2.844     1.824     2.028
Testing time complexity (s)     0.084     0.148     0.244     0.14      1.744
Space complexity (KB)           129       142       155       151       248
Table 7.5 Time and Space Complexity for SHP Dataset

Complexity                      Normal    SMOTE     UB        SERA      REA
Training time complexity (s)    10.29     17.89     60.10     16.98     21.62
Testing time complexity (s)     0.23      0.46      0.53      0.43      11.58
Space complexity (KB)           6881      7061      7329      9858      10,274
Considering the empirical performance of all algorithms across these datasets, SERA offers the best trade-off between performance and complexity. That said, because the time and space consumption of REA is not significantly larger than that of SERA, applications without strict speed requirements or tight computational budgets should also consider REA for its exceptional learning performance.
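To make the over-sampling idea behind methods such as SMOTE concrete, the following is a minimal sketch of generating synthetic minority-class instances by interpolating between a minority sample and one of its nearest minority neighbors. The function name and parameters are illustrative, not taken from the chapter, and a brute-force neighbor search is used for simplicity.

```python
import random

def oversample_minority(minority, k=5, n_synthetic=100, seed=0):
    """Sketch of SMOTE-style over-sampling: for each synthetic point,
    pick a minority sample, find its k nearest minority neighbors
    (brute-force Euclidean), and interpolate toward a random one."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_synthetic):
        x = rng.choice(minority)
        # k nearest minority-class neighbors of x (excluding x itself)
        neighbors = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neighbors)
        gap = rng.random()  # interpolation factor in [0, 1)
        # synthetic point lies on the segment between x and its neighbor
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nb)))
    return synthetic
```

Applied chunk by chunk, such a routine raises the minority-class count inside the current data chunk before training, at the cost of the extra training time visible in the SMOTE columns of Tables 7.3 to 7.5.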
7.5 CONCLUSION

This chapter reviews algorithms for learning from nonstationary data streams with imbalanced class distributions. All of these algorithms increase the number of minority-class examples within the data chunk under consideration to compensate for the imbalanced class ratio. They can be categorized into those that use an over-sampling technique to create synthetic minority-class instances,