[Fig. 8.10 Average normalized error versus training set size (100–1000) for the experimented RSMs, each evaluated under three response transformations ("box-cox=log", "box-cox=0.5", "box-cox=1"): a Linear regression, b Splines, c Radial basis functions, d Kriging, e Evolutionary design, f Neural networks]
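The three curves in each panel of Fig. 8.10 correspond to different Box-Cox power transformations applied to the response before fitting the RSM. As a minimal sketch (the `box_cox` helper and sample values below are illustrative, not taken from the book), the transform is log(y) for λ = 0 and (y^λ − 1)/λ otherwise, so λ = 1 leaves the response unchanged up to an affine shift:

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox power transform: log(y) when lam == 0, else (y**lam - 1) / lam."""
    y = np.asarray(y, dtype=float)
    if lam == 0:
        return np.log(y)
    return (y ** lam - 1.0) / lam

# Illustrative response values; the three settings compared in Fig. 8.10:
y = np.array([1.0, 2.0, 4.0, 8.0])
log_t  = box_cox(y, 0.0)   # "box-cox=log"
sqrt_t = box_cox(y, 0.5)   # "box-cox=0.5"
ident  = box_cox(y, 1.0)   # "box-cox=1" (identity up to an affine shift)
```

Lower λ compresses large response values more aggressively, which can help when the metric being modeled spans several orders of magnitude.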
several days, while Neural Networks took a few hours); Splines and Linear regression were the fastest RSMs (validation for each took only a few minutes), while Radial Basis Functions and Kriging showed intermediate behavior (an overall validation time of several minutes).
Overall, the results showed that the different models present a trade-off between accuracy and computational effort. Throughout the evaluation, we observed that high-accuracy models require high computational time (for both model construction and prediction); conversely, low construction and prediction times led to low accuracy. We can sum up by observing that the best choice