non-linear relationships. In the context of the project, we exploited restricted cubic
splines (RCS), where the expressions associated with σ are third-order polynomials.
Restricted cubic splines are such that the first- and second-order derivatives of
adjacent polynomials match at the knots, while the spline behaves linearly on the tails.
As an example, a closed-form expression of an RCS with three knots (L = 3) is
the following:

σ_3(x_j, b, k) = b_{0,j} + b_{1,j} x_j + b_{2,j} x_j^2 + b_{3,j} x_j^3 + b_{4,j} (x_j − k_1)_+^3 + b_{5,j} (x_j − k_2)_+^3 + b_{6,j} (x_j − k_3)_+^3

where (u)_+^3 denotes the truncated cubic term max(u, 0)^3 and k_1, k_2, k_3 are the knot points in the function domain.
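The three-knot expression above can be sketched in code as follows; the function name and coefficient layout are illustrative assumptions, not part of the original text:

```python
import numpy as np

def rcs3(x, b, knots):
    """Evaluate a three-knot restricted cubic spline (illustrative sketch).

    x:     evaluation point (scalar or array)
    b:     coefficients (b0 .. b6)
    knots: knot points (k1, k2, k3)

    Each knot contributes a truncated cubic term (u)_+^3 = max(u, 0)^3,
    which is what keeps the spline piecewise while staying smooth.
    """
    k1, k2, k3 = knots
    trunc = lambda u: np.maximum(u, 0.0) ** 3  # truncated cubic (u)_+^3
    return (b[0] + b[1] * x + b[2] * x**2 + b[3] * x**3
            + b[4] * trunc(x - k1)
            + b[5] * trunc(x - k2)
            + b[6] * trunc(x - k3))
```

With all truncated-term coefficients set to zero the spline reduces to an ordinary cubic polynomial, which is a quick sanity check for an implementation.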
To determine the number of knots, we started from the observation that five knots or
fewer are generally sufficient for restricted cubic splines. While fewer knots may
suffice for small data sets, a large number of knots increases the risk of
over-fitting the data. In particular, we adopted the following policies for the selection
of the number of knots depending on the number of levels of the j-th parameter x_j:
- If the number of levels is greater than 5, L = 5.
- If the number of levels is smaller than 3, L = 0 (the spline is a linear function of the parameter).
- Otherwise, the number of knots is equal to the number of levels.
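The knot-selection policy above is simple enough to state directly in code; the helper name below is hypothetical:

```python
def select_num_knots(num_levels):
    """Knot-count policy from the text (hypothetical helper name).

    - more than 5 levels  -> L = 5
    - fewer than 3 levels -> L = 0 (spline degenerates to a linear term)
    - otherwise           -> one knot per level
    """
    if num_levels > 5:
        return 5
    if num_levels < 3:
        return 0
    return num_levels
```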
4.4.4 Neural Networks
For function approximation purposes, Feed-forward Neural Networks (also known
as Multilayer Perceptrons) are a very efficient and powerful tool. Feed-forward
networks are organized in successive layers of neurons. The data flow is unidirectional:
the data pass from the first (input) layer to the last (output) layer, and are processed
incrementally by the intermediate hidden layers.
The model of each single neuron is straightforward:
u = Σ_{i=1}^{n} w_i x_i + b ,    y = f(u) = f( Σ_{i=1}^{n} w_i x_i + b )    (4.16)
The net input u is a linear combination of the input values x_i: the weights w_i and the
bias b (or firing threshold) represent the free parameters of the model. The net input
is transformed by means of a transfer function f (or activation function), which in
general is non-linear, yielding the neuron's output y. The behavior and the complexity
of the network are defined by the way its neurons are connected: in general, the
model is a complex recursive function of x that is usually regarded as a black box,
with no explicit analytical expression.
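The neuron model of Eq. (4.16) and the one-hidden-layer architecture discussed here can be sketched as follows; the function names, weight shapes, and the choice of tanh as the non-linear activation are illustrative assumptions:

```python
import numpy as np

def neuron(x, w, b, f=np.tanh):
    """Single neuron of Eq. (4.16): net input u = w . x + b, output y = f(u)."""
    u = np.dot(w, x) + b  # linear combination of inputs plus bias
    return f(u)

def forward(x, W1, b1, W2, b2):
    """Forward pass of a network with one non-linear hidden layer (tanh)
    and a linear output layer. Weight shapes: W1 is (hidden, inputs),
    W2 is (outputs, hidden); these shapes are an assumption for the sketch.
    """
    h = np.tanh(W1 @ x + b1)  # hidden layer: non-linear activation
    return W2 @ h + b2        # output layer: linear, no activation
```

Because the output layer is linear, the network output is a weighted sum of hidden-neuron responses, which is the structure underlying the universal-approximation result cited next.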
It has been shown in [3] that Neural Networks (NN) with a single non-linear
hidden layer and a linear output layer are sufficient for representing any arbitrary (but