8.3 Experimental Conditions
There is no general rule for calculating the ideal parameter settings of a
neural network. The parameters and the respective values used in the proposed
training process for the handwritten character recognition experiments are
shown in Table 2.
9 Implementation and Functional Details
In the present work, the numbers of neurons in the input and output layers are
fixed at 180 and 26, respectively. The 180 input neurons correspond to the size
of the input character: every character is resized to a binary matrix of size
15 × 12 and then reshaped to a matrix of size 180 × 1. The output layer has 26
neurons because there are 26 letters in the English alphabet. The number of
neurons in the hidden layer and the activation functions of the neurons in the
hidden and output layers remain to be decided.
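The resizing and reshaping step described above can be sketched as follows. This is a minimal illustration using NumPy (an assumption; the chapter does not specify the implementation library), with a simulated binary character matrix standing in for a real preprocessed character image:

```python
import numpy as np

# Hypothetical stand-in for a preprocessed character: in practice this
# 15 x 12 binary matrix would come from resizing a scanned character image.
rng = np.random.default_rng(0)
char_matrix = (rng.random((15, 12)) > 0.5).astype(np.float64)

# Reshape the 15 x 12 binary matrix into a 180 x 1 column vector,
# one value per input neuron of the network (15 * 12 = 180).
input_vector = char_matrix.reshape(180, 1)

assert input_vector.shape == (180, 1)
```

Each of the 180 entries of `input_vector` is then fed to one input neuron.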
Table 2  Experimental conditions during the recognition experiment

Parameter                               Value
Input layer
    No. of input neurons                180
    Transfer/activation function        Linear
Hidden layer
    No. of hidden neurons               80
    Transfer/activation function        TanSig
    Learning rule                       Momentum
Output layer
    No. of output neurons               26
    Transfer/activation function        TanSig
    Learning rule                       Momentum
Learning constant                       0.01
Acceptable error level (MSE)            0.001
Momentum term (α)                       0.90
Maximum epochs                          100,000
Termination conditions                  Based on minimum mean square error or
                                        maximum number of epochs allowed
Initial weight and bias term values     Randomly generated values between 0 and 1
Number of hidden layers                 1
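The configuration in Table 2 can be translated into a small sketch of the network and its momentum-based update rule. This is a minimal NumPy illustration of a 180-80-26 network with TanSig (hyperbolic tangent) hidden and output layers, learning constant 0.01, and momentum term 0.90, under the stated assumptions; it is not the authors' implementation, and the training loop and dataset are omitted:

```python
import numpy as np

rng = np.random.default_rng(42)

# Layer sizes from Table 2: 180 inputs, 80 hidden neurons, 26 outputs.
n_in, n_hid, n_out = 180, 80, 26

# Initial weights and bias terms: random values between 0 and 1 (Table 2).
W1, b1 = rng.random((n_hid, n_in)), rng.random((n_hid, 1))
W2, b2 = rng.random((n_out, n_hid)), rng.random((n_out, 1))

# Training constants from Table 2.
lr, alpha = 0.01, 0.90            # learning constant and momentum term
mse_goal, max_epochs = 0.001, 100_000

tansig = np.tanh                  # TanSig activation for hidden and output layers

def forward(x):
    """Linear input layer, TanSig hidden and output layers."""
    h = tansig(W1 @ x + b1)
    y = tansig(W2 @ h + b2)
    return h, y

# Velocity buffers for the momentum learning rule.
vW1, vb1 = np.zeros_like(W1), np.zeros_like(b1)
vW2, vb2 = np.zeros_like(W2), np.zeros_like(b2)

def train_step(x, target):
    """One backpropagation step using gradient descent with momentum."""
    global W1, b1, W2, b2, vW1, vb1, vW2, vb2
    h, y = forward(x)
    err = y - target
    # Derivative of tanh(u) is 1 - tanh(u)**2.
    d_out = err * (1 - y**2)
    d_hid = (W2.T @ d_out) * (1 - h**2)
    # Momentum update: velocity = alpha * velocity - lr * gradient.
    vW2[:] = alpha * vW2 - lr * (d_out @ h.T); W2 += vW2
    vb2[:] = alpha * vb2 - lr * d_out;          b2 += vb2
    vW1[:] = alpha * vW1 - lr * (d_hid @ x.T);  W1 += vW1
    vb1[:] = alpha * vb1 - lr * d_hid;          b1 += vb1
    return float(np.mean(err**2))  # MSE, for the termination check

```

Training would repeat `train_step` over the character set until the MSE falls below `mse_goal` (0.001) or `max_epochs` (100,000) is reached, matching the termination conditions in Table 2.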