- Variogram Type: Gaussian,
- Autofitting Type: Maximizing Likelihood,

• Evolutionary Design
- Crossover Depth: 10,
- Generations Number: 1000,
- Population Size: 500,

• Neural Networks
- Network Size Policy: Automatic.
To enable the validation of the proposed RSMs, all the simulations associated with
the complete design space (1,134 design points) have been performed. The resulting
data have been used to train the RSMs and to validate the predictions. The chosen
sizes for the training sets are: 100, 200, 300, 400, 500, 600, 700, 800, 900 and 1,000
design space points. For each training set size, 5 training sets have been generated
with a pseudo-random process from the simulation data, hence the total number
of training sets is 50. Given a trained RSM, the average normalized error of the
predictions over the complete design space (of 1,134 points) is computed. For each
RSM, the three values of the Box-Cox transform λ ∈ {0, 0.5, 1} were evaluated.
Overall, for each RSM, 150 validation tests were performed.
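The validation protocol above can be sketched in a few lines of Python. The design-space size, training-set sizes, set counts, and λ values come from the text; the random sampling below merely stands in for the pseudo-random generation process, whose details the source does not specify:

```python
import random

DESIGN_SPACE = 1134                      # complete design space (design points)
TRAINING_SIZES = range(100, 1001, 100)   # 100, 200, ..., 1000
SETS_PER_SIZE = 5                        # pseudo-random training sets per size
BOX_COX_LAMBDAS = (0.0, 0.5, 1.0)        # Box-Cox transform values evaluated

all_points = list(range(DESIGN_SPACE))   # indices into the simulation data

training_sets = []
for size in TRAINING_SIZES:
    for _ in range(SETS_PER_SIZE):
        # draw one pseudo-random training set from the simulated design points
        training_sets.append(random.sample(all_points, size))

# 10 sizes x 5 sets = 50 training sets; x 3 lambda values = 150 validation tests
print(len(training_sets))
print(len(training_sets) * len(BOX_COX_LAMBDAS))
```

Running this confirms the counts stated in the text: 50 training sets and 150 validation tests per RSM.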
Figure 8.10 reports ε versus the training set size, for all the experimented RSMs,
where:
ε = (η_1 + η_2 + η_3 + η_4 + η_5) / 5        (8.1)
and η_i, with 1 ≤ i ≤ 5, is the average normalized error observed when training the RSM with the i-th training set of the corresponding size. In
Fig. 8.10 , ε is the average normalized error on the y -axis, while the x -axis shows
the training set size.
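As a minimal illustration of Eq. (8.1), ε is simply the mean of the five per-set errors; the η values below are invented for the example and are not taken from the experiments:

```python
def average_normalized_error(etas):
    """Average the per-training-set errors eta_i into epsilon (Eq. 8.1)."""
    return sum(etas) / len(etas)

# eta_i: average normalized prediction error over the full design space
# when the RSM is trained with the i-th training set (illustrative values)
etas = [0.012, 0.015, 0.011, 0.014, 0.013]
eps = average_normalized_error(etas)
print(round(eps, 4))  # 0.013
```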
Linear Regression, Radial Basis Functions and Splines have been plotted with
a scale from 0 to 0.7 (equivalent to 70% error); Kriging, Neural Network and
Evolutionary Design have a scale ranging from 0 to 0.045 (equivalent to 4.5% error).
Overall, there is evidence that the Neural Network (Fig. 8.10f) achieves the lowest error for a given training set size (less than 0.2% error beyond 100 training samples). On the other hand, the Linear Regression RSM appears inappropriate for modeling this type of use case (Fig. 8.10a), since it yields the highest error and does not improve as the training set size grows.
As can be seen, the logarithmic Box-Cox transformation is crucial for reducing the variance of the data and for improving the model's predictions. We further analyze the behavior of this particular Box-Cox transform in Fig. 8.11. The figure shows the statistical behavior (computed over the 5 runs on the same training set size for each of the RSMs). As can be seen, Evolutionary Design, Kriging and Neural Network provide a strong improvement with respect to Linear Regression and Splines (their scale has been reformatted accordingly), while Radial Basis Functions sit roughly halfway between the two groups.
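A small sketch shows why the logarithmic case (λ = 0) reduces variance on skewed data; the sample values here are illustrative and not taken from the use case:

```python
import math

def box_cox(y, lam):
    """Box-Cox power transform; lam = 0 is the logarithmic case."""
    if lam == 0:
        return math.log(y)
    return (y ** lam - 1.0) / lam

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Heavily skewed positive data (made up for the illustration)
data = [1.0, 2.0, 4.0, 8.0, 100.0, 1000.0]

for lam in (0.0, 0.5, 1.0):
    transformed = [box_cox(y, lam) for y in data]
    print(lam, round(variance(transformed), 3))
# The log case (lam = 0) yields by far the smallest variance on skewed
# data, which is consistent with its benefit for the RSM fit noted above.
```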
However, for the ICT use case, Evolutionary Design and Neural Network have exhibited the highest execution times (the validation for Evolutionary Design lasted for