Table 4 Performance comparison of script recognition accuracy

Author                     | Classifier             | Lexicon size (in words) | Problem domain                   | Recognition rate (%)
Guillevic and Suen (1998)  | HMM/KNN                | 30                      | LA words (English)               | 86.7
Chiang (1998)              | NN                     | 100                     | USPS database mail               | 87.4
Kim et al. (2000)          | HMM/MLP                | 32                      | LA words                         | 92.2
Oliveira et al. (2002)     | MLP                    | 12                      | Numerical strings                | 87.2
Kundu and Chen (2002)      | HMM                    | 100                     | Postal words                     | 88.2
Koch et al. (2004)         | MLP                    | 1,000                   | Letters                          | 67.8
Günter and Bunke (2004)    | HMM + ensemble methods | –                       | IAM                              | 71.58
Günter and Bunke (2005)    | HMM + ensemble methods | –                       | IAM                              | 75.61–82.28
Gatos et al. (2006a)       | K-NN                   | 3,799                   | IAM                              | 81.05
Gatos et al. (2006b)       | SVM                    | –                       | IAM                              | 87.68
Tomoyuki et al. (2007)     | Posterior probability  | 1,646                   | City names (European countries)  | 80.2
to 88 %, which is slightly better than the accuracy achieved in this work. A summary of the recognition performances of some off-line script recognition systems in the same domain, listed in chronological order by year, is given in Table 4.
11 Conclusion and Future Scope
The MLP used in the proposed experiment for handwritten character recognition, employing the backpropagation algorithm, performed exceptionally well with 80 neurons in the hidden layer and 'tansig' as the activation function for both the hidden and output layer neurons. While preparing the training samples, each character image was resized to 15 × 12 and then reshaped into a 180 × 1 column matrix before being applied as an input to the neural network. The length of the feature vector of each character is therefore 180. Also, there are 26 output classes (a–z), one for each character. Hence, in the MLP structure used in the proposed experiment, the number of neurons in the input and output layers has been fixed at 180 and 26, respectively.
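For illustration only, the following Python sketch mirrors the architecture described above: a 15 × 12 character image flattened into a 180-element input vector, 80 hidden neurons, 26 outputs for the classes a–z, tanh activation (the analogue of MATLAB's 'tansig') in both layers, and one plain backpropagation update on a squared-error loss. This is not the authors' implementation; the weight initialization, learning rate, and ±1 target encoding are assumptions made for the sketch.

```python
# Minimal sketch (not the authors' code) of the MLP described above:
# 180 inputs, 80 hidden neurons, 26 outputs, tanh ("tansig"-style) activation
# in both layers, trained by plain backpropagation on squared error.
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes taken from the text; the weight scale is an assumption.
n_in, n_hidden, n_out = 180, 80, 26
W1 = rng.normal(0, 0.1, (n_hidden, n_in))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_out, n_hidden))
b2 = np.zeros(n_out)

def forward(x):
    """x: flattened 15x12 character image, shape (180,)."""
    h = np.tanh(W1 @ x + b1)   # hidden layer activation
    y = np.tanh(W2 @ h + b2)   # output layer activation
    return h, y

def backprop_step(x, target, lr=0.01):
    """One gradient-descent update; target is a 26-dim vector."""
    global W1, b1, W2, b2
    h, y = forward(x)
    err_out = (y - target) * (1.0 - y ** 2)        # output-layer delta (tanh derivative)
    err_hid = (W2.T @ err_out) * (1.0 - h ** 2)    # delta backpropagated to hidden layer
    W2 -= lr * np.outer(err_out, h)
    b2 -= lr * err_out
    W1 -= lr * np.outer(err_hid, x)
    b1 -= lr * err_hid

# Example: a dummy 15x12 image and a target for class 'c' (index 2).
image = rng.random((15, 12))
x = image.reshape(180)        # the 180 x 1 column vector of the text
target = -np.ones(26)
target[2] = 1.0               # +1 for the true class, -1 elsewhere (an assumption)
backprop_step(x, target)
```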