It is very difficult to determine the optimal number of hidden neurons. Too few hidden neurons result in under-fitting: the training error and statistical error are high because there are not enough adjustable parameters to map the input-output relationship. Too many hidden neurons result in over-fitting and high variance: the network tends to memorize the input-output relations and normally fails to generalize, so testing data or unseen data cannot be mapped properly. The number of hidden neurons also governs the system resources required: the larger the number, the more resources are needed. The number of neurons in the hidden layer was kept at 80, chosen by trial and error for optimal results.
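As an illustration of this trial-and-error selection, the following minimal sketch (assuming scikit-learn; the synthetic dataset, candidate sizes, and scores are placeholders, not the original experiment) compares training and test scores across several hidden-layer sizes:

```python
# Illustrative trial-and-error search over the hidden-layer size.
# Assumes scikit-learn; the data here are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 4))                    # hypothetical inputs
y = np.sin(X).sum(axis=1) + 0.1 * rng.normal(size=500)   # hypothetical target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n_hidden in (10, 40, 80, 160):
    net = MLPRegressor(hidden_layer_sizes=(n_hidden,), activation='tanh',
                       max_iter=2000, random_state=0)
    net.fit(X_train, y_train)
    # Low train and test scores -> under-fitting (too few neurons);
    # high train score but much lower test score -> over-fitting (too many).
    print(n_hidden, net.score(X_train, y_train), net.score(X_test, y_test))
```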
Each neuron in the neural network has a transformation function. To produce an output, the neuron applies this function to the weighted sum of its inputs. Activation functions used in neural networks include compet, hardlim, logsig, poslin, purelin, radbas, satlin, softmax, tansig and tribas. The two transfer functions normally used in an MLP are logsig and tansig.
The logsig transfer function is also known as the logistic sigmoid:

$$\mathrm{logsig}(x) = \frac{1}{1 + e^{-x}}$$
The tansig transfer function is also known as the hyperbolic tangent:

$$\mathrm{tansig}(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$
The hyperbolic tangent and logistic sigmoid are related by:

$$\frac{\mathrm{tansig}(x) + 1}{2} = \frac{1}{1 + e^{-2x}}$$

Rearranging, we get:

$$\mathrm{tansig}(x) = \frac{2}{1 + e^{-2x}} - 1$$
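As a quick numerical check of this identity, a minimal NumPy sketch follows; the function names logsig and tansig mirror the MATLAB transfer functions named above, but this is an illustration, not the toolbox implementation:

```python
# NumPy sketch of the two transfer functions; names mirror the MATLAB
# functions discussed in the text, but this is not the toolbox code.
import numpy as np

def logsig(x):
    return 1.0 / (1.0 + np.exp(-x))

def tansig(x):
    return (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))

x = np.linspace(-5, 5, 11)
# tansig(x) = 2 / (1 + e^{-2x}) - 1, i.e. a rescaled logistic sigmoid.
assert np.allclose(tansig(x), 2.0 / (1.0 + np.exp(-2 * x)) - 1.0)
assert np.allclose(tansig(x), 2.0 * logsig(2 * x) - 1.0)
```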
These transfer functions are commonly used because they are mathematically convenient and, although they saturate, they are close to linear near the origin. The 'tansig' activation function was used for the hidden and output layer neurons in the proposed experiment.
The training of the network with the back-propagation algorithm has been carried out in the Neural Network Toolbox under the MATLAB environment; a neural network model employing back-propagation falls under the category of supervised learning.
The adaptive learning function 'traingdx' has been used in the neural network training process, and the Mean Square Error (MSE) has been selected as the cost function.
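For readers working outside MATLAB, a roughly comparable setup can be sketched with scikit-learn: SGD with momentum and an adaptive learning rate approximates 'traingdx', and MLPRegressor minimizes a squared-error cost. The 80-neuron hidden layer and tanh activation match the configuration described above, though this sketch uses a linear output layer and placeholder data:

```python
# Hedged sketch of a comparable training setup, assuming scikit-learn:
# SGD with momentum and an adaptive learning rate roughly mirrors
# 'traingdx', and MLPRegressor trains against a squared-error cost.
# X_train and y_train are placeholders for the experiment's actual data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(200, 4))   # hypothetical training inputs
y_train = np.sin(X_train).sum(axis=1)         # hypothetical training targets

net = MLPRegressor(
    hidden_layer_sizes=(80,),    # 80 hidden neurons, as chosen above
    activation='tanh',           # tansig-style hidden activation
    solver='sgd',                # gradient descent (back-propagation)
    momentum=0.9,                # momentum term, as in traingdx
    learning_rate='adaptive',    # learning rate adapts during training
    max_iter=2000,
    random_state=0,
)
net.fit(X_train, y_train)        # training minimizes a squared-error cost
print('final training loss:', net.loss_)
```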