and the output is a single neuron assuming the value 1 if lung cancer is
detected and 0 otherwise. All neurons use a sigmoidal activation function.
The net has been trained using the resilient back-propagation algorithm, a
gradient-descent-based method in which only the sign of the derivative
determines the direction of each weight update; a minimal sketch of this
sign-based rule is given below. We chose this algorithm because it offered
the best compromise between the error on the validation set and convergence.
Finally, we set the number of neurons in the hidden layer to three; this
value was obtained by training a set of networks with an increasing number
of hidden neurons and picking the smallest one with a good validation error.
Since ANN results depend on the weight initialization, we trained the net
20 times and chose the best configuration (according to the early-stopping
error) to evaluate the test set.
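Purely as an illustration of the sign-based update mentioned above, the
following minimal Python sketch implements the core Rprop rule (in its
iRprop- variant); the constants eta_plus, eta_minus, step_min, and step_max
are the commonly used defaults, not values taken from our experiments:

    import numpy as np

    def rprop_update(w, grad, prev_grad, step,
                     eta_plus=1.2, eta_minus=0.5,
                     step_min=1e-6, step_max=50.0):
        # Only the SIGN of each partial derivative is used; a per-weight
        # step size is adapted multiplicatively instead.
        sign_change = grad * prev_grad
        step = np.where(sign_change > 0,
                        np.minimum(step * eta_plus, step_max), step)
        step = np.where(sign_change < 0,
                        np.maximum(step * eta_minus, step_min), step)
        # Where the sign flipped, skip this update (iRprop- variant).
        grad = np.where(sign_change < 0, 0.0, grad)
        w = w - np.sign(grad) * step
        return w, grad, step

    # Toy usage on the loss f(w) = ||w||^2 / 2, whose gradient is w itself.
    w = np.array([3.0, -2.0])
    prev_grad = np.zeros_like(w)
    step = np.full_like(w, 0.1)
    for _ in range(100):
        grad = w  # gradient of the toy loss
        w, prev_grad, step = rprop_update(w, grad, prev_grad, step)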
- Radial Basis Functions (RBF) : RBF networks are feed-forward architectures
for approximating nonlinear functions, consisting of a hidden layer of
radial kernels and an output layer of linear neurons. Each hidden neuron in
an RBF network is tuned to respond to a local region of the feature space
by means of a radially symmetric function such as the Gaussian. The output
units linearly combine the hidden units to predict the output variable, in
a similar fashion to MLPs. After selecting the radial basis centers using
c-means clustering [10], the spreads are determined from the average
distance between neighboring cluster centers or from the sample covariance
of each cluster. Finally, the radial basis activations are used as
regressors to predict the target outputs; this construction is sketched
below. There are several possible learning processes for an RBF network,
depending on how the centers of the hidden layer are defined. By
construction, the different layers of an RBF network perform different
tasks. From a geometrical point of view, RBF network decision boundaries
are hyperellipsoids.
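The following sketch illustrates this construction, assuming scikit-learn
is available for the c-means (k-means) clustering step; the function names,
the number of centers, and the choice of a single shared spread are
illustrative, not those of the system described here:

    import numpy as np
    from sklearn.cluster import KMeans

    def fit_rbf(X, y, n_centers=5):
        # Centers from c-means (k-means) clustering of the inputs.
        centers = KMeans(n_clusters=n_centers, n_init=10).fit(X).cluster_centers_
        # Spread from the average distance between neighboring centers.
        d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        sigma = d.min(axis=1).mean()
        # Gaussian activations of the hidden layer.
        Phi = np.exp(-np.linalg.norm(X[:, None, :] - centers[None, :, :],
                                     axis=-1) ** 2 / (2.0 * sigma ** 2))
        # Linear output weights (with bias) fitted by least squares.
        Phi1 = np.hstack([Phi, np.ones((len(X), 1))])
        w, *_ = np.linalg.lstsq(Phi1, y, rcond=None)
        return centers, sigma, w

    def predict_rbf(X, centers, sigma, w):
        Phi = np.exp(-np.linalg.norm(X[:, None, :] - centers[None, :, :],
                                     axis=-1) ** 2 / (2.0 * sigma ** 2))
        return np.hstack([Phi, np.ones((len(X), 1))]) @ w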
- Self-Organizing Map (SOM) : SOMs, or Kohonen maps, are an unsupervised
learning algorithm that provides a method to represent multidimensional
data in a lower-dimensional space. In practice, they map an input vector
of dimension M into a smaller space of dimension N (with N ≪ M). They
are a specific topology of feed-forward neural network in which each unit
is connected to all input units but not to the other units. The SOM does
not need any output target and can thus be considered a clustering
algorithm for exploratory data analysis. The SOM consists of a regular,
typically two-dimensional grid of processing neurons, called map units,
that is iteratively trained; a training sketch is given below. Each area
reacts to a specific stimulus, and observing the activation patterns allows
one to infer some properties of the input. The accuracy and the
generalization capability of a SOM depend on the number of map units used.
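A minimal training sketch for such a grid follows; the map size, the random
initialization, and the exponential decay schedules for the learning rate
and the neighborhood radius are illustrative assumptions:

    import numpy as np

    def train_som(X, rows=5, cols=5, n_iter=1000, lr0=0.5, radius0=2.0):
        rng = np.random.default_rng(0)
        weights = rng.normal(size=(rows, cols, X.shape[1]))
        # Coordinates of each map unit on the 2-D grid.
        grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                    indexing="ij"), axis=-1)
        for t in range(n_iter):
            x = X[rng.integers(len(X))]
            # Locate the best-matching unit (BMU) on the map.
            dist = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(dist.argmin(), dist.shape)
            # Exponentially decaying learning rate and radius.
            lr = lr0 * np.exp(-t / n_iter)
            radius = radius0 * np.exp(-t / n_iter)
            # Gaussian neighborhood around the BMU in grid space: the BMU
            # and its neighbors are pulled toward the input vector.
            g = np.exp(-np.linalg.norm(grid - np.array(bmu), axis=-1) ** 2
                       / (2.0 * radius ** 2))
            weights += lr * g[..., None] * (x - weights)
        return weights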
- Learning Vector Quantization (LVQ) : LVQ is a supervised variant of the
SOM and can be considered a special case of artificial neural network in
which a winner-takes-all, Hebbian-learning-based approach is applied. The
network has three layers: an input layer, a Kohonen classification layer,
and a competitive output layer. After the network is initialized with
prototypes, it adapts them so that each winning prototype moves toward
training samples of its own class and away from samples of other classes,
as sketched below.
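A sketch of the basic LVQ1 update, as one concrete instance of this
prototype adaptation, is given below; the prototype initialization and the
linearly decaying learning rate are illustrative assumptions:

    import numpy as np

    def train_lvq1(X, y, prototypes, proto_labels, n_epochs=20, lr0=0.1):
        # prototypes: float array (n_prototypes, n_features), e.g. class
        # means; proto_labels: the class label assigned to each prototype.
        rng = np.random.default_rng(0)
        for epoch in range(n_epochs):
            lr = lr0 * (1.0 - epoch / n_epochs)  # linearly decaying rate
            for i in rng.permutation(len(X)):
                # Winner-takes-all: the nearest prototype wins the sample.
                k = np.linalg.norm(prototypes - X[i], axis=1).argmin()
                # Move toward same-class samples, away from others.
                direction = 1.0 if proto_labels[k] == y[i] else -1.0
                prototypes[k] += direction * lr * (X[i] - prototypes[k])
        return prototypes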
 